sourceName | url | action | body | format | metadata | title | updated
---|---|---|---|---|---|---|---|
devcenter | https://www.mongodb.com/developer/languages/rust/rust-quickstart-aggregation | created | # Getting Started with Aggregation Pipelines in Rust
MongoDB's aggregation pipelines are one of its most powerful features. They allow you to write expressions, broken down into a series of stages, which perform operations including aggregation, transformations, and joins on the data in your MongoDB databases. This allows you to do calculations and analytics across documents and collections within your MongoDB database.
## Prerequisites
This quick start is the second in a series of Rust posts. I *highly* recommend you start with my first post, Basic MongoDB Operations in Rust, which will show you how to get set up correctly with a free MongoDB Atlas database cluster containing the sample data you'll be working with here. Go read it and come back. I'll wait. Without it, you won't have the database set up correctly to run the code in this quick start guide.
In summary, you'll need:
- An up-to-date version of Rust. I used 1.49, but any recent version should work well.
- A code editor of your choice. I recommend VS Code with the Rust Analyzer extension.
- A MongoDB cluster containing the `sample_mflix` dataset. You can find instructions to set that up in the first blog post in this series.
## Getting Started
MongoDB's aggregation pipelines are very powerful and so they can seem a little overwhelming at first. For this reason, I'll start off slowly. First, I'll show you how to build up a pipeline that duplicates behaviour that you can already achieve with MongoDB's `find()` method, but instead using an aggregation pipeline with `$match`, `$sort`, and `$limit` stages. Then, I'll show how to make queries that go beyond what can be done with `find`, demonstrating using `$lookup` to include related documents from another collection. Finally, I'll put the "aggregation" into "aggregation pipeline" by showing you how to use `$group` to group together documents to form new document summaries.
>All of the sample code for this quick start series can be found on GitHub. I recommend you check it out if you get stuck, but otherwise, it's worth following the tutorial and writing the code yourself!
All of the pipelines in this post will be executed against the sample_mflix database's `movies` collection. It contains documents that look like this (I'm showing you what they look like in Python, because it's a little more readable than the equivalent Rust struct):
``` python
{
'_id': ObjectId('573a1392f29313caabcdb497'),
'awards': {'nominations': 7,
'text': 'Won 1 Oscar. Another 2 wins & 7 nominations.',
'wins': 3},
'cast': ['Janet Gaynor', 'Fredric March', 'Adolphe Menjou', 'May Robson'],
'countries': ['USA'],
'directors': ['William A. Wellman', 'Jack Conway'],
'fullplot': 'Esther Blodgett is just another starry-eyed farm kid trying to '
'break into the movies. Waitressing at a Hollywood party, she '
'catches the eye of alcoholic star Norman Maine, is given a test, '
'and is caught up in the Hollywood glamor machine (ruthlessly '
'satirized). She and her idol Norman marry; but his career '
'abruptly dwindles to nothing',
'genres': ['Drama'],
'imdb': {'id': 29606, 'rating': 7.7, 'votes': 5005},
'languages': ['English'],
'lastupdated': '2015-09-01 00:55:54.333000000',
'plot': 'A young woman comes to Hollywood with dreams of stardom, but '
'achieves them only with the help of an alcoholic leading man whose '
'best days are behind him.',
'poster': 'https://m.media-amazon.com/images/M/MV5BMmE5ODI0NzMtYjc5Yy00MzMzLTk5OTQtN2Q3MzgwOTllMTY3XkEyXkFqcGdeQXVyNjc0MzMzNjA@._V1_SY1000_SX677_AL_.jpg',
'rated': 'NOT RATED',
'released': datetime.datetime(1937, 4, 27, 0, 0),
'runtime': 111,
'title': 'A Star Is Born',
'tomatoes': {'critic': {'meter': 100, 'numReviews': 11, 'rating': 7.4},
'dvd': datetime.datetime(2004, 11, 16, 0, 0),
'fresh': 11,
'lastUpdated': datetime.datetime(2015, 8, 26, 18, 58, 34),
'production': 'Image Entertainment Inc.',
'rotten': 0,
'viewer': {'meter': 79, 'numReviews': 2526, 'rating': 3.6},
'website': 'http://www.vcientertainment.com/Film-Categories?product_id=73'},
'type': 'movie',
'writers': ['Dorothy Parker (screen play)',
'Alan Campbell (screen play)',
'Robert Carson (screen play)',
'William A. Wellman (from a story by)',
'Robert Carson (from a story by)'],
'year': 1937}
```
There's a lot of data there, but I'll be focusing mainly on the `_id`, `title`, `year`, and `cast` fields.
## Your First Aggregation Pipeline
Aggregation pipelines are executed by the mongodb crate using a Collection's `aggregate()` method.
The first argument to `aggregate()` is a sequence of pipeline stages to be executed. Much like a query, each stage of an aggregation pipeline is a BSON document. You'll often create these using the `doc!` macro that was introduced in the previous post.
An aggregation pipeline operates on *all* of the data in a collection. Each stage in the pipeline is applied to the documents passing through, and whatever documents are emitted from one stage are passed as input to the next stage, until there are no more stages left. At this point, the documents emitted from the last stage in the pipeline are returned to the client program, as a cursor, in a similar way to a call to `find()`.
Individual stages, such as `$match`, can act as a filter, to only pass through documents matching certain criteria. Other stage types, such as `$project`, `$addFields`, and `$lookup`, will modify the content of individual documents as they pass through the pipeline. Finally, certain stage types, such as `$group`, will create an entirely new set of documents based on the documents passed into it taken as a whole. None of these stages change the data that is stored in MongoDB itself. They just change the data before returning it to your program! There *are* stages, like $out, which can save the results of a pipeline back into MongoDB, but I won't be covering it in this quick start.
I'm going to assume that you're working in the same environment that you used for the last post, so you should already have the mongodb crate configured as a dependency in your `Cargo.toml` file, and you should have a `.env` file containing your `MONGODB_URI` environment variable.
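The snippets below also assume a handful of imports at the top of your file. Here's a sketch of what they might look like; the exact paths can vary slightly depending on your version of the `mongodb` crate:
``` rust
// A sketch of the imports used by the snippets in this post:
use std::{env, fmt};

use futures::stream::StreamExt; // provides `next()` on the cursor
use mongodb::{
    bson::{self, doc},
    options::{ClientOptions, ResolverConfig},
};
use serde::Deserialize;
```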
### Finding and Sorting
First, paste the following into your Rust code:
``` rust
// Load the MongoDB connection string from an environment variable:
let client_uri =
env::var("MONGODB_URI").expect("You must set the MONGODB_URI environment var!");
// An extra line of code to work around a DNS issue on Windows:
let options =
ClientOptions::parse_with_resolver_config(&client_uri, ResolverConfig::cloudflare())
.await?;
let client = mongodb::Client::with_options(options)?;
// Get the 'movies' collection from the 'sample_mflix' database:
let movies = client.database("sample_mflix").collection("movies");
```
The above code will provide a `Collection` instance called `movies`, which points to the `movies` collection in your database.
Here is some code which creates a pipeline, executes it with `aggregate`, and then loops through and prints the detail of each movie in the results. Paste it into your program.
``` rust
// Usually implemented outside your main function:
#[derive(Deserialize)]
struct MovieSummary {
title: String,
cast: Vec<String>,
year: i32,
}
impl fmt::Display for MovieSummary {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"{}, {}, {}",
self.title,
self.cast.get(0).unwrap_or(&"- no cast -".to_owned()),
self.year
)
}
}
// Inside main():
let pipeline = vec![
doc! {
// filter on movie title:
"$match": {
"title": "A Star Is Born"
}
},
doc! {
// sort by year, ascending:
"$sort": {
"year": 1
}
},
];
// Look up "A Star is Born" in ascending year order:
let mut results = movies.aggregate(pipeline, None).await?;
// Loop through the results, convert each of them to a MovieSummary, and then print out.
while let Some(result) = results.next().await {
// Use serde to deserialize into the MovieSummary struct:
let doc: MovieSummary = bson::from_document(result?)?;
println!("* {}", doc);
}
```
This pipeline has two stages. The first is a `$match` stage, which is similar to querying a collection with `find()`. It filters the documents passing through the stage based on a read operation query. Because it's the first stage in the pipeline, its input is all of the documents in the `movies` collection. The query for the `$match` stage filters on the `title` field of the input documents, so the only documents that will be output from this stage will have a title of "A Star Is Born."
The second stage is a `$sort` stage. Only the documents for the movie "A Star Is Born" are passed to this stage, so the result will be all of the movies called "A Star Is Born," now sorted by their `year` field, with the oldest movie first.
Calls to aggregate() return a cursor pointing to the resulting documents. Cursor implements the Stream trait. The cursor can be looped through like any other stream, as long as you've imported StreamExt, which provides the `next()` method. The code above loops through all of the returned documents and prints a short summary, consisting of the title, the first actor in the `cast` array, and the year the movie was produced.
Executing the code above results in:
``` none
* A Star Is Born, Janet Gaynor, 1937
* A Star Is Born, Judy Garland, 1954
* A Star Is Born, Barbra Streisand, 1976
```
### Refactoring the Code
It is possible to build up whole aggregation pipelines as a single data structure, as in the example above, but it's not necessarily a good idea. Pipelines can get long and complex. For this reason, I recommend you build up each stage of your pipeline as a separate variable, and then combine the stages into a pipeline at the end, like this:
``` rust
// Match title = "A Star Is Born":
let stage_match_title = doc! {
"$match": {
"title": "A Star Is Born"
}
};
// Sort by year, ascending:
let stage_sort_year_ascending = doc! {
"$sort": { "year": 1 }
};
// Now the pipeline is easier to read:
let pipeline = vec![stage_match_title, stage_sort_year_ascending];
```
### Limit the Number of Results
To obtain only the *most recent* production of "A Star Is Born," you can reverse the sort order and add a `$limit` stage.
The **modified and new** code looks like this:
``` rust
// Sort by year, descending:
let stage_sort_year_descending = doc! {
"$sort": {
"year": -1
}
};
// Limit to 1 document:
let stage_limit_1 = doc! { "$limit": 1 };
let pipeline = vec![
stage_match_title,
stage_sort_year_descending,
stage_limit_1,
];
```
Because the documents are now sorted with the most recent year first and the output is limited to a single document, this pipeline returns only the most recent production of "A Star Is Born."
## Look Up Related Data in Other Collections
I'll show you how to obtain related documents from another collection, and embed them in the documents from your primary collection.
First, modify the definition of the `MovieSummary` struct so that it has a `comments` field, loaded from a `related_comments` BSON field. Define a `Comment` struct that contains a subset of the data contained in a `comments` document.
``` rust
#[derive(Deserialize)]
struct MovieSummary {
title: String,
cast: Vec<String>,
year: i32,
#[serde(default, rename = "related_comments")]
comments: Vec<Comment>,
}
#[derive(Debug, Deserialize)]
struct Comment {
email: String,
name: String,
text: String,
}
```
Next, create a new pipeline from scratch, and start with the following:
``` rust
// Look up related documents in the 'comments' collection:
let stage_lookup_comments = doc! {
"$lookup": {
"from": "comments",
"localField": "_id",
"foreignField": "movie_id",
"as": "related_comments",
}
};
// Limit to the first 5 documents:
let stage_limit_5 = doc! { "$limit": 5 };
let pipeline = vec![
stage_lookup_comments,
stage_limit_5,
];
let mut results = movies.aggregate(pipeline, None).await?;
// Loop through the results and print a summary and the comments:
while let Some(result) = results.next().await {
let doc: MovieSummary = bson::from_document(result?)?;
println!("* {}, comments={:?}", doc, doc.comments);
}
```
The stage I've called `stage_lookup_comments` is a `$lookup` stage. This `$lookup` stage will look up documents from the `comments` collection that have the same movie id. The matching comments will be listed as an array in a BSON field named `related_comments`, with an array value containing all of the comments that have this movie's `_id` value as `movie_id`.
I've added a `$limit` stage just to ensure that there's a reasonable amount of output without being overwhelming.
Now, execute the code.
>
>
>You may notice that the pipeline above runs pretty slowly! There are two reasons for this:
>
>- There are 23.5k movie documents and 50k comments.
>- There's a missing index on the `comments` collection. It's missing on purpose, to teach you about indexes!
>
>I'm not going to show you how to fix the index problem right now. I'll write about that in a later post in this series, focusing on indexes. Instead, I'll show you a trick for working with slow aggregation pipelines while you're developing.
>
>Working with slow pipelines is a pain while you're writing and testing the pipeline. *But*, if you put a temporary `$limit` stage at the *start* of your pipeline, it will make the query faster (although the results may be different because you're not running on the whole dataset).
>
>When I was writing this pipeline, I had a first stage of `{ "$limit": 1000 }`.
>
>When you have finished crafting the pipeline, you can comment out the first stage so that the pipeline will now run on the whole collection. **Don't forget to remove the first stage, or you're going to get the wrong results!**
>
>
The aggregation pipeline above will print out summaries of five movie documents. I expect that most or all of your movie summaries will end with this: `comments=[]`.
### Matching on Array Length
If you're *lucky*, you may have some documents in the array, but it's unlikely, as most of the movies have no comments. Now, I'll show you how to add some stages to match only movies which have more than two comments.
Ideally, you'd be able to add a single `$match` stage which obtained the length of the `related_comments` field and matched it against the expression `{ "$gt": 2 }`. In this case, it's actually two steps:
- Add a field (I'll call it `comment_count`) containing the length of the `related_comments` field.
- Match where the value of `comment_count` is greater than two.
Here is the code for the two stages:
``` rust
// Calculate the number of comments for each movie:
let stage_add_comment_count = doc! {
"$addFields": {
"comment_count": {
"$size": "$related_comments"
}
}
};
// Match movie documents with more than 2 comments:
let stage_match_with_comments = doc! {
"$match": {
"comment_count": {
"$gt": 2
}
}
};
```
The two stages go after the `$lookup` stage, and before the `$limit` 5 stage:
``` rust
let pipeline = vec![
stage_lookup_comments,
stage_add_comment_count,
stage_match_with_comments,
stage_limit_5,
];
```
While I'm here, I'm going to clean up the output of this code to format the comments slightly better:
``` rust
let mut results = movies.aggregate(pipeline, None).await?;
// Loop through the results and print a summary and the comments:
while let Some(result) = results.next().await {
let doc: MovieSummary = bson::from_document(result?)?;
println!("* {}", doc);
if doc.comments.len() > 0 {
// Print a max of 5 comments per movie:
for comment in doc.comments.iter().take(5) {
println!(
" - {} <{}>: {}",
comment.name,
comment.email,
comment.text.chars().take(60).collect::<String>(),
);
}
} else {
println!(" - No comments");
}
}
```
*Now* when you run this code, you should see something more like this:
``` none
* Midnight, Claudette Colbert, 1939
- Sansa Stark : Error ex culpa dignissimos assumenda voluptates vel. Qui inventore
- Theon Greyjoy : Animi dolor minima culpa sequi voluptate. Possimus necessitatibu
- Donna Smith : Et esse nulla ducimus tempore aliquid. Suscipit iste dignissimos v
```
It's good to see Sansa Stark from Game of Thrones really knows her Latin, isn't it?
Now I've shown you how to work with lookups in your pipelines, I'll show you how to use the `$group` stage to do actual *aggregation*.
## Grouping Documents with `$group`
I'll start with a new pipeline again.
The `$group` stage is one of the more difficult stages to understand, so I'll break this down slowly.
Start with the following code:
``` rust
// Define a struct to hold grouped data by year:
#[derive(Debug, Deserialize)]
struct YearSummary {
_id: i32,
#[serde(default)]
movie_count: i64,
#[serde(default)]
movie_titles: Vec<String>,
}
// Some movies have "year" values ending with 'è'.
// This stage will filter them out:
let stage_filter_valid_years = doc! {
"$match": {
"year": {
"$type": "number",
}
}
};
/*
* Group movies by year, producing 'year-summary' documents that look like:
* {
* '_id': 1917,
* }
*/
let stage_group_year = doc! {
"$group": {
"_id": "$year",
}
};
let pipeline = vec![stage_filter_valid_years, stage_group_year];
// Loop through the 'year-summary' documents:
let mut results = movies.aggregate(pipeline, None).await?;
// Loop through the yearly summaries and print their debug representation:
while let Some(result) = results.next().await {
let doc: YearSummary = bson::from_document(result?)?;
println!("* {:?}", doc);
}
```
In the `movies` collection, some of the years contain the "è" character. This database has some messy values in it. In this case, there's only a small handful of documents, and I think we should just remove them, so I've added a `$match` stage that filters out any documents with a `year` that's not numeric.
Execute this code, and you should see something like this:
``` none
* YearSummary { _id: 1959, movie_count: 0, movie_titles: [] }
* YearSummary { _id: 1980, movie_count: 0, movie_titles: [] }
* YearSummary { _id: 1977, movie_count: 0, movie_titles: [] }
* YearSummary { _id: 1933, movie_count: 0, movie_titles: [] }
* YearSummary { _id: 1998, movie_count: 0, movie_titles: [] }
* YearSummary { _id: 1922, movie_count: 0, movie_titles: [] }
* YearSummary { _id: 1948, movie_count: 0, movie_titles: [] }
* YearSummary { _id: 1965, movie_count: 0, movie_titles: [] }
* YearSummary { _id: 1950, movie_count: 0, movie_titles: [] }
* YearSummary { _id: 1968, movie_count: 0, movie_titles: [] }
...
```
Each line is a document emitted from the aggregation pipeline. But you're not looking at *movie* documents anymore. The `$group` stage groups input documents by the specified `_id` expression and outputs one document for each unique `_id` value. In this case, the expression is `$year`, which means one document will be emitted for each unique value of the `year` field. Each document emitted can (and usually will) also contain values generated from aggregating data from the grouped documents. Currently, the YearSummary documents are using the default values for `movie_count` and `movie_titles`. Let's fix that.
Change the stage definition to the following:
``` rust
let stage_group_year = doc! {
"$group": {
"_id": "$year",
// Count the number of movies in the group:
"movie_count": { "$sum": 1 },
}
};
```
This will add a `movie_count` field, containing the result of adding `1` for every document in the group. In other words, it counts the number of movie documents in the group. If you execute the code now, you should see something like the following:
``` none
* YearSummary { _id: 2005, movie_count: 758, movie_titles: [] }
* YearSummary { _id: 1999, movie_count: 542, movie_titles: [] }
* YearSummary { _id: 1943, movie_count: 36, movie_titles: [] }
* YearSummary { _id: 1926, movie_count: 9, movie_titles: [] }
* YearSummary { _id: 1935, movie_count: 40, movie_titles: [] }
* YearSummary { _id: 1966, movie_count: 116, movie_titles: [] }
* YearSummary { _id: 1971, movie_count: 116, movie_titles: [] }
* YearSummary { _id: 1952, movie_count: 58, movie_titles: [] }
* YearSummary { _id: 2013, movie_count: 1221, movie_titles: [] }
* YearSummary { _id: 1912, movie_count: 2, movie_titles: [] }
...
```
There are a number of accumulator operators, like `$sum`, that allow you to summarize data from the group. If you wanted to build an array of all the movie titles in the emitted document, you could add `"movie_titles": { "$push": "$title" },` to the `$group` stage. In that case, you would get `YearSummary` instances that look like this:
``` none
* YearSummary { _id: 1986, movie_count: 206, movie_titles: ["Defense of the Realm", "F/X", "Mala Noche", "Witch from Nepal", ... ]}
```
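For reference, the full `$group` stage with both accumulators would then look something like this:
``` rust
let stage_group_year = doc! {
    "$group": {
        "_id": "$year",
        // Count the number of movies in the group:
        "movie_count": { "$sum": 1 },
        // Collect the title of every movie in the group:
        "movie_titles": { "$push": "$title" },
    }
};
```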
Add the following stage to sort the results:
``` rust
let stage_sort_year_ascending = doc! {
"$sort": {"_id": 1}
};
let pipeline = vec![
stage_filter_valid_years, // Match numeric years
stage_group_year,
stage_sort_year_ascending, // Sort by year (which is the unique _id field)
];
```
Note that the `$match` stage is added to the start of the pipeline, and the `$sort` is added to the end. A general rule is that you should filter documents out early in your pipeline, so that later stages have fewer documents to deal with. It also ensures that the pipeline is more likely to be able to take advantages of any appropriate indexes assigned to the collection.
>Remember, all of the sample code for this quick start series can be found on GitHub.
Aggregations using `$group` are a great way to discover interesting things about your data. In this example, I'm illustrating the number of movies made each year, but it would also be interesting to see information about movies for each country, or even look at the movies made by different actors.
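As a hint of what that could look like, here's a rough sketch of a pipeline grouped by country instead of year. Because `countries` is an array field, it needs an `$unwind` stage first; the variable names are just illustrative:
``` rust
// Unwind the 'countries' array so each movie/country pair becomes its own document:
let stage_unwind_countries = doc! { "$unwind": "$countries" };
// Group by country and count the movies per country:
let stage_group_country = doc! {
    "$group": {
        "_id": "$countries",
        "movie_count": { "$sum": 1 },
    }
};
let pipeline = vec![stage_unwind_countries, stage_group_country];
```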
## What Have You Learned?
You've learned how to construct aggregation pipelines to filter, group, and join documents with other collections. You've hopefully learned that putting a `$limit` stage at the start of your pipeline can be useful to speed up development (but should be removed before going to production). You've also learned some basic optimization tips, like putting filtering expressions towards the start of your pipeline instead of towards the end.
As you've gone through, you'll probably have noticed that there's a *ton* of different stage types, operators, and accumulator operators. Learning how to use the different components of aggregation pipelines is a big part of learning to use MongoDB effectively as a developer.
I love working with aggregation pipelines, and I'm always surprised at what you can do with them!
## Next Steps
Aggregation pipelines are super powerful, and because of this, they're a big topic to cover. Check out the full documentation to get a better idea of their full scope.
MongoDB University also offers a *free* online course on The MongoDB Aggregation Framework.
Note that aggregation pipelines can also be used to generate new data and write it back into a collection, with the $out stage.
MongoDB provides a *free* GUI tool called Compass. It allows you to connect to your MongoDB cluster, so you can browse through databases and analyze the structure and contents of your collections. It includes an aggregation pipeline builder which makes it easier to build aggregation pipelines. I highly recommend you install it, or if you're using MongoDB Atlas, use its similar aggregation pipeline builder in your browser. I often use them to build aggregation pipelines, and they include export buttons which will export your pipeline as Python code (which isn't too hard to transform into Rust).
I don't know about you, but when I was looking at some of the results above, I thought to myself, "It would be fun to visualise this with a chart." MongoDB provides a hosted service called Charts which just *happens* to take aggregation pipelines as input. So, now's a good time to give it a try! | md | {
"tags": [
"Rust",
"MongoDB"
],
"pageDescription": "Query, group, and join data in MongoDB using aggregation pipelines with Rust.",
"contentType": "Quickstart"
} | Getting Started with Aggregation Pipelines in Rust | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/email-password-authentication-react | created | # Implement Email/Password Authentication in React
> **Note:** GraphQL is deprecated. Learn more.
Welcome back to our journey building a full stack web application with MongoDB Atlas App Services, GraphQL, and React!
In the first part of the series, we configured the email/password authentication provider in our backend App Service. In this second article, we will integrate the authentication into a web application built with React. We will write only a single line of server-side code and let the App Service handle the rest!
We will also build the front end of our expense management application, Expengo, using React. By the end of today’s tutorial, we will have the following web application:
## Set up your React web application
Make sure you have Node.js and npm installed on your machine. You can check if they’re correctly set up by running the following commands in your terminal emulator:
```sh
node -v
npm -v
```
### Create the React app
Let’s create a brand new React app. Launch your terminal and execute the following command, where “expengo” will be the name of our app:
```sh
npx create-react-app expengo -y
```
The process may take up to a minute to complete. After it’s finished, navigate to your new project:
```sh
cd expengo
```
### Add required dependencies
Next, we’ll install the Realm Web SDK. The SDK enables browser-based applications to access data stored in MongoDB Atlas and interact with Atlas App Services like Functions, authentication, and GraphQL.
```
npm install realm-web
```
We’ll also install a few other npm packages to make our lives easier:
1. React-router-dom to manage navigation in our app:
```
npm install react-router-dom
```
1. Material UI to help us build beautiful components without writing a lot of CSS:
```
npm install @mui/material @emotion/styled @emotion/react
```
### Scaffold the application structure
Finally, let’s create three new directories with a few files in them. To do that, we’ll use the shell. Feel free to use a GUI or your code editor if you prefer.
```sh
(cd src/ && mkdir pages/ contexts/ realm/)
(cd src/pages && touch Home.page.js PrivateRoute.page.js Login.page.js Signup.page.js)
(cd src/contexts && touch user.context.js)
(cd src/realm && touch constants.js)
```
Open the expengo directory in your code editor. The project directory should have the following structure:
```
├── README.md
└──node_modules/
├── …
├── package-lock.json
├── package.json
└── public/
├── …
└──src/
└──contexts/
├──user.context.js
└──pages/
├──Home.page.js
├──PrivateRoute.page.js
├──Login.page.js
├──Signup.page.js
└── realm/
├──constants.js
├── App.css
├── App.js
├── App.test.js
├── index.css
├── index.js
├── logo.svg
├── reportWebVitals.js
└── setupTests.js
```
## Connect your React app with App Services and handle user management
In this section, we will be creating functions and React components in our app to give our users the ability to log in, sign up, and log out.
* Start by copying your App Services App ID:
Now open this file: `./src/realm/constants.js`
Paste the following code and replace the placeholder with your app Id:
```js
export const APP_ID = "<-- Your App ID -->";
```
### Create a React Context for user management
Now we will add a new React Context on top of all our routes to get access to our user’s details, such as their profile and access tokens. Whenever we need to call a function on a user’s behalf, we can easily do that by consuming this React Context through child components.
The following code also implements functions that will do all the interactions with our Realm Server to perform authentication. Please take a look at the comments for a function-specific description.
**./src/contexts/user.context.js**
```js
import { createContext, useState } from "react";
import { App, Credentials } from "realm-web";
import { APP_ID } from "../realm/constants";
// Creating a Realm App Instance
const app = new App(APP_ID);
// Creating a user context to manage and access all the user related functions
// across different components and pages.
export const UserContext = createContext();
export const UserProvider = ({ children }) => {
const [user, setUser] = useState(null);
// Function to log in user into our App Service app using their email & password
const emailPasswordLogin = async (email, password) => {
const credentials = Credentials.emailPassword(email, password);
const authenticatedUser = await app.logIn(credentials);
setUser(authenticatedUser);
return authenticatedUser;
};
// Function to sign up user into our App Service app using their email & password
const emailPasswordSignup = async (email, password) => {
try {
await app.emailPasswordAuth.registerUser(email, password);
// Since we are automatically confirming our users, we are going to log in
// the user using the same credentials once the signup is complete.
return emailPasswordLogin(email, password);
} catch (error) {
throw error;
}
};
// Function to fetch the user (if the user is already logged in) from local storage
const fetchUser = async () => {
if (!app.currentUser) return false;
try {
await app.currentUser.refreshCustomData();
// Now, if we have a user, we are setting it to our user context
// so that we can use it in our app across different components.
setUser(app.currentUser);
return app.currentUser;
} catch (error) {
throw error;
}
}
// Function to logout user from our App Services app
const logOutUser = async () => {
if (!app.currentUser) return false;
try {
await app.currentUser.logOut();
// Setting the user to null once loggedOut.
setUser(null);
return true;
} catch (error) {
throw error
}
}
return (
<UserContext.Provider value={{ user, fetchUser, emailPasswordLogin, emailPasswordSignup, logOutUser }}>
{children}
</UserContext.Provider>
);
}
```
## Create a PrivateRoute page
This is a wrapper page that will only allow authenticated users to access our app’s private pages. We will see it in action in our ./src/App.js file.
**./src/pages/PrivateRoute.page.js**
```js
import { useContext } from "react";
import { Navigate, Outlet, useLocation } from "react-router-dom";
import { UserContext } from "../contexts/user.context";
const PrivateRoute = () => {
// Fetching the user from the user context.
const { user } = useContext(UserContext);
const location = useLocation();
const redirectLoginUrl = `/login?redirectTo=${encodeURI(location.pathname)}`;
// If the user is not logged in we are redirecting them
// to the login page. Otherwise we are letting them
// continue to the page as per the URL using <Outlet />.
return !user ? <Navigate to={redirectLoginUrl} /> : <Outlet />;
}
export default PrivateRoute;
```
## Create a login page
Next, let’s add a login page.
**./src/pages/Login.page.js**
```js
import { Button, TextField } from "@mui/material";
import { useContext, useEffect, useState } from "react";
import { Link, useLocation, useNavigate } from "react-router-dom";
import { UserContext } from "../contexts/user.context";
const Login = () => {
const navigate = useNavigate();
const location = useLocation();
// We are consuming our user-management context to
// get & set the user details here
const { user, fetchUser, emailPasswordLogin } = useContext(UserContext);
// We are using React's "useState" hook to keep track
// of the form values.
const [form, setForm] = useState({
email: "",
password: ""
});
// This function will be called whenever the user edits the form.
const onFormInputChange = (event) => {
const { name, value } = event.target;
setForm({ ...form, [name]: value });
};
// This function will redirect the user to the
// appropriate page once the authentication is done.
const redirectNow = () => {
const redirectTo = location.search.replace("?redirectTo=", "");
navigate(redirectTo ? redirectTo : "/");
}
// Once a user logs in to our app, we don’t want to ask them for their
// credentials again every time the user refreshes or revisits our app,
// so we are checking if the user is already logged in and
// if so we are redirecting the user to the home page.
// Otherwise we will do nothing and let the user to login.
const loadUser = async () => {
if (!user) {
const fetchedUser = await fetchUser();
if (fetchedUser) {
// Redirecting them once fetched.
redirectNow();
}
}
}
// This useEffect will run only once when the component is mounted.
// Hence this is helping us in verifying whether the user is already logged in
// or not.
useEffect(() => {
loadUser(); // eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
// This function gets fired when the user clicks on the "Login" button.
const onSubmit = async (event) => {
try {
// Here we are passing user details to our emailPasswordLogin
// function that we imported from our realm/authentication.js
// to validate the user credentials and log in the user into our App.
const user = await emailPasswordLogin(form.email, form.password);
if (user) {
redirectNow();
}
} catch (error) {
if (error.statusCode === 401) {
alert("Invalid username/password. Try again!");
} else {
alert(error);
}
}
};
// A simple form UI -- adjust the markup and styling to your taste.
return (
<form style={{ display: "flex", flexDirection: "column", maxWidth: "300px", margin: "auto" }}>
<h1>LOGIN</h1>
<TextField label="Email" type="email" variant="outlined" name="email" value={form.email} onChange={onFormInputChange} style={{ marginBottom: "1rem" }} />
<TextField label="Password" type="password" variant="outlined" name="password" value={form.password} onChange={onFormInputChange} style={{ marginBottom: "1rem" }} />
<Button variant="contained" color="primary" onClick={onSubmit}>Login</Button>
<p>Don't have an account? <Link to="/signup">Signup</Link></p>
</form>
);
}
export default Login;
```
## Create a signup page
Now our users can log into the application, but how do they sign up? Time to add a signup page!
**./src/pages/Signup.page.js**
```js
import { Button, TextField } from "@mui/material";
import { useContext, useState } from "react";
import { Link, useLocation, useNavigate } from "react-router-dom";
import { UserContext } from "../contexts/user.context";
const Signup = () => {
const navigate = useNavigate();
const location = useLocation();
// As explained in the Login page.
const { emailPasswordSignup } = useContext(UserContext);
const [form, setForm] = useState({
email: "",
password: ""
});
// As explained in the Login page.
const onFormInputChange = (event) => {
const { name, value } = event.target;
setForm({ ...form, [name]: value });
};
// As explained in the Login page.
const redirectNow = () => {
const redirectTo = location.search.replace("?redirectTo=", "");
navigate(redirectTo ? redirectTo : "/");
}
// As explained in the Login page.
const onSubmit = async () => {
try {
const user = await emailPasswordSignup(form.email, form.password);
if (user) {
redirectNow();
}
} catch (error) {
alert(error);
}
};
// A simple form UI -- adjust the markup and styling to your taste.
return (
<form style={{ display: "flex", flexDirection: "column", maxWidth: "300px", margin: "auto" }}>
<h1>SIGNUP</h1>
<TextField label="Email" type="email" variant="outlined" name="email" value={form.email} onChange={onFormInputChange} style={{ marginBottom: "1rem" }} />
<TextField label="Password" type="password" variant="outlined" name="password" value={form.password} onChange={onFormInputChange} style={{ marginBottom: "1rem" }} />
<Button variant="contained" color="primary" onClick={onSubmit}>Signup</Button>
<p>Have an account already? <Link to="/login">Login</Link></p>
</form>
);
}
export default Signup;
```
## Create a homepage
Our homepage will be a basic page with a title and logout button.
**./src/pages/Home.page.js:**
```js
import { Button } from '@mui/material'
import { useContext } from 'react';
import { UserContext } from '../contexts/user.context';
export default function Home() {
const { logOutUser } = useContext(UserContext);
// This function is called when the user clicks the "Logout" button.
const logOut = async () => {
try {
// Calling the logOutUser function from the user context.
const loggedOut = await logOutUser();
// Now we will refresh the page, and the user will be logged out and
// redirected to the login page because of the <PrivateRoute /> component.
if (loggedOut) {
window.location.reload(true);
}
} catch (error) {
alert(error)
}
}
return (
<>
<h1>WELCOME TO EXPENGO</h1>
<Button variant="contained" onClick={logOut}>Logout</Button>
</>
)
}
```
## Putting it all together in App.js
Let’s connect all of our pages in the root React component—App.
**./src/App.js**
```js
import { BrowserRouter, Route, Routes } from "react-router-dom";
import { UserProvider } from "./contexts/user.context";
import Home from "./pages/Home.page";
import Login from "./pages/Login.page";
import PrivateRoute from "./pages/PrivateRoute.page";
import Signup from "./pages/Signup.page";
function App() {
return (
<BrowserRouter>
{/* We are wrapping our whole app with UserProvider so that */}
{/* our user is accessible throughout the app from any page */}
<UserProvider>
<Routes>
<Route path="/login" element={<Login />} />
<Route path="/signup" element={<Signup />} />
{/* We are protecting our Home Page from unauthenticated */}
{/* users by wrapping it with PrivateRoute here. */}
<Route element={<PrivateRoute />}>
<Route path="/" element={<Home />} />
</Route>
</Routes>
</UserProvider>
</BrowserRouter>
);
}
export default App;
```
## Launch your React app
All you have to do now is run the following command from your project directory:
```
npm start
```
Once the compilation is complete, you will be able to access your app from your browser at http://localhost:3000/. You should be able to sign up and log into your app now.
## Conclusion
Woah! We have made a tremendous amount of progress. Authentication is a very crucial part of any app and once that’s done, we can focus on features that will make our users’ lives easier. In the next part of this blog series, we’ll be leveraging App Services GraphQL to perform CRUD operations. I’m excited about that because the basic setup is already over.
If you have any doubts or concerns, please feel free to reach out to us on the MongoDB Community Forums. I have created a dedicated forum topic for this blog where we can discuss anything related to this blog series.
And before you ask, here’s the GitHub repository, as well!
| md | {
"tags": [
"Atlas",
"JavaScript",
"React"
],
"pageDescription": "Configuring signup and login authentication is a common step for nearly every web application. Learn how to set up email/password authentication in React using MongoDB Atlas App Services.",
"contentType": "Tutorial"
} | Implement Email/Password Authentication in React | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/flask-app-ufo-tracking | created |
You must submit {{message}}.
| md | {
"tags": [
"Python",
"Flask"
],
"pageDescription": "Learn step-by-step how to build a full-stack web application to track reports of unidentified flying objects (UFOs) in your area.",
"contentType": "Tutorial"
} | Build an App With Python, Flask, and MongoDB to Track UFOs | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/slowly-changing-dimensions-application-mongodb | created | # Slowly Changing Dimensions and Their Application in MongoDB
The concept of “slowly changing dimensions” (usually abbreviated as SCD) has been around for a long time and is a staple in SQL-based data warehousing. The fundamental idea is to track all changes to data in the data warehouse over time. The “slowly changing” part of the name refers to the assumption that the data that is covered by this data model changes with a low frequency, but without any apparent pattern in time. This data model is used when the requirements for the data warehouse cover functionality to track and reproduce outputs based on historical states of data.
One common case of this is for reporting purposes, where the data warehouse must explain the difference of a report produced last month, and why the aggregated values are different in the current version of the report. Requirements such as these are often encountered in financial reporting systems.
There are many ways to implement slowly changing dimensions in SQL, referred to as the “types.” Types 0 and 1 are the most basic ones, which only keep track of the current state of the data (Type 1) or the original state (Type 0). The most commonly applied one is Type 2. SCD Type 2 implements three new fields, “validFrom,” “validTo,” and an optional flag on the latest set of data, which is usually called “isValid” or “isEffective.”
**Table of SCD types:**
| | |
| --- | --- |
| **SCD Type** | **Description** |
| SCD Type 0 | Only keep original state, data can not be changed |
| SCD Type 1 | Only keep updated state, history can not be stored |
| SCD Type 2 | Keep history in new row/document |
| SCD Type 3 | Keep history in new fields in same row/document |
| SCD Type 4 | Keep history in separate collection |
| SCD Types >4 | Combinations of previous types — e.g., Type 5 is Type 1 plus Type 4 |
In this simplest implementation of SCD, every record contains the information on the validity period for this set of data and all different validities are kept in the same collection or table.
In applying this same concept to MongoDB’s document data model, the approach is exactly the same as in a relational database. In the comparison of data models, the normalization that is the staple of relational databases is not the recommended approach in the document model, but the details of this have been covered in many blog posts — for example, the 6 Rules of Thumb for MongoDB Schema Design. The concept of slowly changing dimensions applies on a per document basis in the chosen and optimized data model for the specific use case. The best way to illustrate this is in a small example.
Consider the following use case: Your MongoDB stores the prices of a set of items, and you need to keep track of the changes of the price of an item over time, in order to be able to process returns of an item, as the money refunded needs to be the price of the item at the time of purchase. You have a simple collection called “prices” and each document has an itemID and a price.
```
db.prices.insertMany([
{ 'item': 'shorts', 'price': 10 },
{ 'item': 't-shirt', 'price': 2 },
{ 'item': 'pants', 'price': 5 }
]);
```
Now, the price of “pants” changes from 5 to 7. This can be done and tracked by assuming default values for the necessary data fields for SCD Type 2. The default value for “validFrom” is 01.01.1900, “validTo” is 01.01.9999, and isValid is “true.”
The change to the price of the “pants” item is then executed as an insert of the new document, and an update to the previously valid one.
```
let now = new Date();
db.prices.updateOne(
{ 'item': 'pants', "$or":[{"isValid":false},{"isValid":null}]},
{"$set":{"validFrom":new Date("1900-01-01"), "validTo":now,"isValid":false}}
);
db.prices.insertOne(
{ 'item': 'pants', 'price': 7 ,"validFrom":now, "validTo":new Date("9999-01-01"),"isValid":true}
);
```
As it is essential that the chain of validity is unbroken, the two database operations should happen with the same timestamp. Depending on the requirements of the application, it might make sense to wrap these two commands into a transaction to ensure both changes are always applied together. There are also ways to push this process to the background, but as per the initial assumption in the slowly changing dimensions, changes like this are infrequent and data consistency is the highest priority. Therefore, the performance impact of a transaction is acceptable for this use case.
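Here's a minimal sketch of what that could look like in the shell, assuming the two statements above and a deployment that supports transactions (a replica set or sharded cluster):
```
// Sketch: apply the "close old version" update and the "insert new version"
// insert inside one transaction so both changes commit together.
const session = db.getMongo().startSession();
const prices = session.getDatabase(db.getName()).prices;
const now = new Date();
session.startTransaction();
try {
  prices.updateOne(
    { 'item': 'pants', "$or": [{"isValid": false}, {"isValid": null}] },
    { "$set": { "validFrom": new Date("1900-01-01"), "validTo": now, "isValid": false } }
  );
  prices.insertOne(
    { 'item': 'pants', 'price': 7, "validFrom": now, "validTo": new Date("9999-01-01"), "isValid": true }
  );
  session.commitTransaction();
} catch (error) {
  session.abortTransaction();
  throw error;
} finally {
  session.endSession();
}
```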
If you then want to query the latest price for an item, it’s as simple as specifying:
```
db.prices.find({ 'item': 'pants','isValid':true});
```
And if you want to query for the state at a specific point in time:
```
let time = new Date("2022-11-16T13:00:00")
db.prices.find({ 'item': 'pants','validFrom':{'$lte':time}, 'validTo':{'$gt':time}});
```
This example shows that the flexibility of the document model allows us to take a relational concept and directly apply it to data inside MongoDB. But it also opens up other methods that are not possible in relational databases. Consider the following: What if you only need to track changes to very few fields in a document? Then you could simply embed the history of a field as an array in the first document. This implements SCD Type 3, storing the history in new fields, but without the limitation and overhead of creating new columns in a relational database. SCD Type 3 in RDMBS is usually limited to storing only the last few changes, as adding new columns on the fly is not possible.
The following aggregation pipeline does exactly that. It changes the price to 7, and stores the previous value of the price with a timestamp of when the old price became invalid in an array called “priceHistory”:
```
db.prices.aggregate([
{ $match: {'item': 'pants'}},
{ $addFields: { price: 7 ,
priceHistory: { $concatArrays:
[{$ifNull: ['$priceHistory', []]},
[{price: "$price",time: now}]]}
}
},
{ $merge: {
into: "prices",
on: "_id",
whenMatched: "merge",
whenNotMatched: "fail"
}}])
```
There are some caveats to that solution concerning large array sizes, but there are known solutions to deal with these kinds of data modeling challenges. In order to avoid large arrays, you could apply the “Outlier” or “Bucketing” patterns, two of the many possibilities in MongoDB schema design, where you will also find many useful explanations on what to avoid.
In this way, you could store the most recent history of data changes in the documents themselves, and if any analysis gets deeper into past changes, it would have to load the older change history from a separate collection. This approach might sound similar to the stated issue of adding new fields in a relational database, but there are two differences: Firstly, MongoDB does not encounter this problem until more than 100 changes are done on a single document. And secondly, MongoDB has tools to dynamically deal with large arrays, whereas in relational DBs, the solution would be to choose a different approach, as even pre-allocating more than 10 columns for changes is not a good idea in SQL.
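For illustration, here's a sketch of how the embedded history could be capped at a fixed size using an update with an aggregation pipeline (available since MongoDB 4.2). The limit of 100 entries and the new price of 8 are just example values, and moving the older entries to a separate collection would be a separate step:
```
db.prices.updateOne(
  { 'item': 'pants' },
  [
    { "$set": {
        "price": 8,
        "priceHistory": {
          "$slice": [
            { "$concatArrays": [
                { "$ifNull": ["$priceHistory", []] },
                [{ "price": "$price", "time": "$$NOW" }]
            ]},
            -100  // keep only the 100 most recent entries in the document
          ]
        }
    }}
  ]
);
```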
But in both worlds, dealing with many changes in SCD Type 3 requires an extension to a different SCD type, as having a separate collection for the history is SCD Type 4.
## Outlook Data Lake/Data Federation
The shown example focuses on a strict and accurate representation of changes. Sometimes, there are less strict requirements on the necessity to show historical changes in data. It might be that 95% of the time, the applications using the MongoDB database are only interested in the current state of the data, but some (analytical) queries still need to be run on the full history of data.
In this case, it might be more efficient to store the current version of the data in one collection, and the historical changes in another. The historical collection could then even be removed from the active MongoDB cluster by using MongoDB Atlas Federated Database functionalities, and in the fully managed version using Atlas Online Archive.
If the requirement for tracking the changes is different in a way that not every single change needs to be tracked, but rather a series of checkpoints is required to show the state of data at specific times, then Atlas Data Lake might be the correct solution. With Atlas Data Lake, you are able to extract a snapshot of the data at specific points in time, giving you a similar level of traceability, albeit at fixed time intervals. Initially the concept of SCD was developed to avoid data duplication in such a case, as it does not store an additional document if nothing changes. In today's world where cold storage has become much more affordable, Data Lake offers the possibility to analyze data from your productive system, using regular snapshots, without doing any changes to the system or even increasing the load on the core database.
All in all, the concept of slowly changing dimensions enables you to cover part of the core requirements for a data warehouse by giving you the necessary tools to keep track of all changes.
## Applying SCD methods outside of data warehousing
While the fundamental concept of slowly changing dimensions was developed with data warehouses in mind, another area where derivatives of the techniques developed there can be useful is in event-driven applications. Given the case that you have infrequent events, in different types of categories, it’s oftentimes an expensive database action to find the latest event per category. The process for that might require grouping and/or sorting your data in order to find the current state.
In this case, it might make sense to amend the data model by a flag similar to the “isValid'' flag of the SCD Type 2 example above, or even go one step further and not only store the event time per document, but adding the time of the next event in a similar fashion to the SCD Type 2 implementation. The flag enables very fast queries for the latest set of data per event type, and the date ensures that if you execute a search for a specific point in time, it’s easy and efficient to get the respective event that you are looking for.
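As a small illustration, assume a hypothetical `events` collection where each document carries an `isValid` flag and the timestamp of the next event; the collection and field names here are made up:
```
// Latest event per type, no grouping or sorting needed:
db.events.find({ type: "temperature", isValid: true });

// State of that event type at a specific point in time:
let t = new Date("2022-11-16T13:00:00");
db.events.find({ type: "temperature", eventTime: { $lte: t }, nextEventTime: { $gt: t } });
```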
In such a case, it might make sense to separate the “events” and their processed versions that include the isValid flag and the validity end date into separate collections, utilizing more of the methodologies of the different types of SCD implementations.
So, the next time you encounter a data model that requires keeping track of changes, think, “SCD could be useful and can easily be applied in the document model.” If you want to implement slowly changing dimensions in your MongoDB use case, consider getting support from the MongoDB Professional Services team. | md | {
"tags": [
"Atlas"
],
"pageDescription": "This article describes how to implement the concept of “slowly changing dimensions” (SCD) in the MongoDB document model and how to efficiently query them.",
"contentType": "Article"
} | Slowly Changing Dimensions and Their Application in MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/johns-hopkins-university-covid-19-data-atlas | created | # How to work with Johns Hopkins University COVID-19 Data in MongoDB Atlas
## TL;DR
Our MongoDB cluster is running on version 7.0.3.
You can connect to it using MongoDB Compass, the Mongo Shell, SQL or any MongoDB driver supporting at least MongoDB 7.0
with the following URI:
``` none
mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19
```
> `readonly` is both the username and the password; they are not meant to be replaced.
## News
### November 15th, 2023
- Johns Hopkins University (JHU) has stopped collecting data as of March 10th, 2023.
- Here is JHU's GitHub repository.
- First data entry is 2020-01-22, last one is 2023-03-09.
- Cluster now running on 7.0.3
- Removed the database `covid19jhu` with the raw data. Use the much better database `covid19`.
- BI Tools access is now disabled.
### December 10th, 2020
- Upgraded the cluster to 4.4.
- Improved the python data import script to calculate the daily values using the existing cumulative values with
an Aggregation Pipeline.
- confirmed_daily.
- deaths_daily.
- recovered_daily.
### May 13th, 2020
- Renamed the field "city" to "county" and "cities" to "counties" where appropriate. They contain the data from the
column "Admin2" in JHU CSVs.
### May 6th, 2020
- The `covid19` database now has 5 collections. More details in
our README.md.
- The `covid19.statistics` collection is renamed `covid19.global_and_us` for more clarity.
- Maxime's Charts are now
using the `covid19.global_and_us` collection.
- The dataset is updated hourly so any commit done by JHU will be reflected at most one hour later in our cluster.
## Table of Contents
- Introduction
- The MongoDB Dataset
- Get Started
- Explore the Dataset with MongoDB Charts
- Explore the Dataset with MongoDB Compass
- Explore the Dataset with the MongoDB Shell
- Accessing the Data with Java
- Accessing the Data with Node.js
- Accessing the Data with Python
- Accessing the Data with Golang
- Accessing the Data with Google Colaboratory
- Accessing the Data with Business Intelligence Tools
- Accessing the Data with any SQL tool
- Take a copy of the data
- Wrap up
- Sources
## Introduction
As the COVID-19 pandemic has swept the globe, the work of JHU (Johns Hopkins University) and
its COVID-19 dashboard has become vitally important in keeping people informed
about the progress of the virus in their communities, in their countries, and in the world.
JHU not only publishes their dashboard,
but they make the data powering it freely available for anyone to use.
However, their data is delivered as flat CSV files which you need to download each time to then query. We've set out to
make that up-to-date data more accessible so people could build other analyses and applications directly on top of the
data set.
We are now hosting a service with a frequently updated copy of the JHU data in MongoDB Atlas, our database in the cloud.
This data is free for anyone to query using the MongoDB Query language and/or SQL. We also support
a variety of BI tools directly, so you can query the data with Tableau,
Qlik and Excel.
With the MongoDB COVID-19 dataset there will be no more manual downloads and no more frequent format changes. With this
data set, this service will deliver a consistent JSON and SQL view every day with no
downstream ETL required.
None of the actual data is modified. It is simply structured to make it easier to query by placing it within
a MongoDB Atlas cluster and by creating some convenient APIs.
## The MongoDB Dataset
All the data we use to create the MongoDB COVID-19 dataset comes from the JHU dataset. In their
turn, here are the sources they are using:
- the World Health Organization,
- the National Health Commission of the People's Republic of China,
- the United States Centre for Disease Control,
- the Australia Government Department of Health,
- the European Centre for Disease Prevention and Control,
- and many others.
You can read the full list on their GitHub repository.
Using the CSV files they provide, we are producing two different databases in our cluster.
- `covid19jhu` contains the raw CSV files imported with
the mongoimport tool,
- `covid19` contains the same dataset but with a clean MongoDB schema design with all the good practices we are
recommending.
Here is an example of a document in the `covid19` database:
``` javascript
{
"_id" : ObjectId("5e957bfcbd78b2f11ba349bf"),
"uid" : 312,
"country_iso2" : "GP",
"country_iso3" : "GLP",
"country_code" : 312,
"state" : "Guadeloupe",
"country" : "France",
"combined_name" : "Guadeloupe, France",
"population" : 400127,
"loc" : {
"type" : "Point",
"coordinates" : -61.551, 16.265 ]
},
"date" : ISODate("2020-04-13T00:00:00Z"),
"confirmed" : 143,
"deaths" : 8,
"recovered" : 67
}
```
The document above was obtained by joining together the file `UID_ISO_FIPS_LookUp_Table.csv` and the CSV files time
series you can find
in this folder.
Some fields might not exist in all the documents because they are not relevant or are just not provided
by JHU. If you want more details, run a schema analysis
with MongoDB Compass on the different collections available.
If you prefer to host the data yourself, the scripts required to download and transform the JHU data are
open-source. You
can view them and instructions for how to use them on our GitHub repository.
In the `covid19` database, you will find 5 collections which are detailed in
our GitHub repository README.md file.
- metadata
- global (the data from the time series global files)
- us_only (the data from the time series US files)
- global_and_us (the most complete one)
- countries_summary (same as global but countries are grouped in a single doc for each date)
## Get Started
You can begin exploring the data right away without any MongoDB or programming experience
using MongoDB Charts
or MongoDB Compass.
In the following sections, we will also show you how to consume this dataset using the Java, Node.js and Python drivers.
We will show you how to perform the following queries in each language:
- Retrieve the last 5 days of data for a given place,
- Retrieve all the data for the last day,
- Make a geospatial query to retrieve data within a certain distance of a given place.
### Explore the Dataset with MongoDB Charts
With Charts, you can create visualisations of the data using any of the
pre-built graphs and charts. You can
then arrange this into a unique dashboard,
or embed the charts in your pages or blogs.
:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-4266-8264-d37ce88ff9fa
theme=light autorefresh=3600}
> If you want to create your own MongoDB Charts dashboard, you will need to set up your
> own [Free MongoDB Atlas cluster and import the dataset in your cluster using
> the import scripts or
> use `mongoexport & mongoimport` or `mongodump & mongorestore`. See this section for more
> details: Take a copy of the data.
### Explore the Dataset with MongoDB Compass
Compass allows you to dig deeper into the data using
the MongoDB Query Language or via
the Aggregation Pipeline visual editor. Perform a range of
operations on the
data, including mathematical, comparison and groupings.
Create documents that provide unique insights and interpretations. You can use the output from your pipelines
as data-sources for your Charts.
For MongoDB Compass or your driver, you can use this connection string.
```
mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19
```
### Explore the Dataset with the MongoDB Shell
Because we store the data in MongoDB, you can also access it via
the MongoDB Shell or
using any of our drivers. We've limited access to these collections to 'read-only'.
You can find the connection strings for the shell and Compass below, as well as driver examples
for Java, Node.js,
and Python to get you started.
``` shell
mongo "mongodb+srv://covid-19.hip2i.mongodb.net/covid19" --username readonly --password readonly
```
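Once connected, you can run queries like the following directly in the shell. This is just a sketch based on the field names in the example document above; note that `$centerSphere` takes a radius in radians, hence the division by the Earth's radius in kilometers:
``` javascript
// Last 5 days of data for a given place, most recent first:
db.global_and_us.find({ combined_name: "Guadeloupe, France" }).sort({ date: -1 }).limit(5)

// All documents for the last day in the dataset (2023-03-09):
db.global_and_us.find({ date: new Date("2023-03-09") })

// Documents within roughly 500 km of a given point:
db.global_and_us.find({
  loc: { $geoWithin: { $centerSphere: [ [ -61.55, 16.26 ], 500 / 6378.1 ] } },
  date: new Date("2023-03-09")
})
```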
### Accessing the Data with Java
Our Java examples are available in
our Github Repository Java folder.
You need the three POJOs from
the Java Github folder
to make this work.
### Accessing the Data with Node.js
Our Node.js examples are available in
our Github Repository Node.js folder.
### Accessing the Data with Python
Our Python examples are available in
our Github Repository Python folder.
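If you just want a quick taste before looking at the full examples, a minimal pymongo snippet could look like this (you may need `pip install "pymongo[srv]"` for the `+srv` connection string):
``` python
# A minimal sketch with pymongo and the read-only credentials from above.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19")
db = client.covid19

# Most recent document for a given place:
doc = db.global_and_us.find_one(
    {"combined_name": "Guadeloupe, France"},
    sort=[("date", -1)],
)
print(doc)
```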
### Accessing the Data with Golang
Our Golang examples are available in
our Github Repository Golang folder.
### Accessing the Data with Google Colaboratory
If you have a Google account, a great way to get started is with
our Google Colab Notebook.
The sample code shows how to install pymongo and use it to connect to the MongoDB COVID-19 dataset. There are some
example queries which show how to query the data and display it in the notebook, and the last example demonstrates how
to display a chart using Pandas & Matplotlib!
If you want to modify the notebook, you can take a copy by selecting "Save a copy in Drive ..." from the "File" menu,
and then you'll be free to edit the copy.
### Accessing the Data with Business Intelligence Tools
You can get lots of value from the dataset without any programming at all. We've enabled
the Atlas BI Connector (not anymore, see News section), which exposes
an SQL interface to MongoDB's document structure. This means you can use data analysis and dashboarding tools
like Tableau, Qlik Sense,
and even MySQL Workbench to analyze, visualise and extract understanding
from the data.
Here's an example of a visualisation produced in a few clicks with Tableau:
Tableau is a powerful data visualisation and dashboard tool, and can be connected to our COVID-19 data in a few steps.
We've written a short tutorial
to get you up and running.
### Accessing the Data with any SQL tool
As mentioned above, the Atlas BI Connector is activated (not anymore, see News section), so you can
connect any SQL tool to this cluster using the following connection information:
- Server: covid-19-biconnector.hip2i.mongodb.net,
- Port: 27015,
- Database: covid19,
- Username: readonly or readonly?source=admin,
- Password: readonly.
### Take a copy of the data
Accessing *our* copy of this data in a read-only database is useful, but it won't be enough if you want to integrate it
with other data within a single MongoDB cluster. You can obtain a copy of the database, either to use offline using a
different tool outside of MongoDB, or to load into your own MongoDB instance. `mongoexport` is a command-line tool that
produces a JSONL or CSV export of data stored in a MongoDB instance. First, follow
these instructions to install the MongoDB Database Tools.
Now you can run the following in your console to download the metadata and global_and_us collections as jsonl files in
your current directory:
``` bash
mongoexport --collection='global_and_us' --out='global_and_us.jsonl' --uri="mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19"
mongoexport --collection='metadata' --out='metadata.jsonl' --uri="mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19"
```
> Use the `--jsonArray` option if you prefer to work with a JSON array rather than a JSONL file.
Documentation for all the features of `mongoexport` is available on
the MongoDB website and with the command `mongoexport --help`.
Once you have the data on your computer, you can use it directly with local tools, or load it into your own MongoDB
instance using mongoimport.
``` bash
mongoimport --collection='global_and_us' --uri="mongodb+srv://:@.mongodb.net/covid19" global_and_us.jsonl
mongoimport --collection='metadata' --uri="mongodb+srv://:@.mongodb.net/covid19" metadata.jsonl
```
> Note that you cannot run these commands against our cluster because the user we gave you (`readonly:readonly`) doesn't
> have write permission on this cluster.
> Read our Getting Your Free MongoDB Atlas Cluster blog post if you want to know more.
Another smart way to duplicate the dataset in your own cluster would be to use `mongodump` and `mongorestore`. Apart from being more efficient, this approach also grabs the index definitions along with the data.
``` bash
mongodump --uri="mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19"
mongorestore --drop --uri=""
```
## Wrap up
We see the value and importance of making this data as readily available to everyone as possible, so we're not stopping
here. Over the coming days, we'll be adding a GraphQL and REST API, as well as making the data available within Excel
and Google Sheets.
We've also launched an Atlas credits program for
anyone working on detecting, understanding, and stopping the spread of COVID-19.
If you are having any problems accessing the data, or have other data sets you would like to host, please contact us on the MongoDB community. We would also love to showcase any services you build on top of this data set. Finally, please send in PRs for any code changes you would like to make to the examples.
You can also reach out to the authors
directly (Aaron Bassett, Joe Karlsson, Mark Smith,
and Maxime Beugnet) on Twitter.
## Sources
- MongoDB Open Data COVID-19 GitHub repository
- JHU Dataset on GitHub repository
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Making the Johns Hopkins University COVID-19 Data open and accessible to all with MongoDB",
"contentType": "Article"
} | How to work with Johns Hopkins University COVID-19 Data in MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mongodb-orms-odms-libraries | created | # MongoDB ORMs, ODMs, and Libraries
Though developers have always been capable of manually writing complex queries to interact with a database, this approach can be tedious and error-prone. Object-Relational Mappers (or ORMs) improve the developer experience, as they accomplish multiple meaningful tasks:
* Facilitating interactions between the database and an application by abstracting away the need to write raw SQL or database query language.
* Managing serialization/deserialization of data to objects.
* Enforcing schemas.
So, while it’s true that MongoDB offers Drivers with idiomatic APIs and helpers for most programming languages, sometimes a higher level abstraction is desirable. Developers are used to interacting with data in a more declarative fashion (LINQ for C#, ActiveRecord for Ruby, etc.) and an ORM facilitates code maintainability and reuse by allowing developers to interact with data as objects.
MongoDB provides a number of ORM-like libraries, and our community and partners have as well! These are sometimes referred to as ODMs (Object Document Mappers), as MongoDB is not a relational database management system. However, they exist to solve the same problem as ORMs do and the terminology can be used interchangeably.
The following are some examples of the best MongoDB ORM or ODM libraries for a number of programming languages, including Ruby, Python, Java, Node.js, and PHP.
## Beanie
Beanie is an Asynchronous Python object-document mapper (ODM) for MongoDB, based on Motor (an asynchronous MongoDB driver) and Pydantic.
When using Beanie, each database collection has a corresponding document that is used to interact with that collection. In addition to retrieving data, Beanie allows you to add, update, and delete documents from the collection. Beanie saves you time by removing boilerplate code, and it helps you focus on the parts of your app that actually matter.
See the Beanie documentation for more information.
## Doctrine
Doctrine is a PHP MongoDB ORM, even though it’s referred to as an ODM. This library provides PHP object mapping functionality and transparent persistence for PHP objects to MongoDB, as well as a mechanism to map embedded or referenced documents. It can also create references between PHP documents in different databases and work with GridFS buckets.
See the Doctrine MongoDB ODM documentation for more information.
## Mongoid
Most Ruby-based applications are built using the Ruby on Rails framework. As a result, Rails’ Active Record implementation, conventions, CRUD API, and callback mechanisms are second nature to Ruby developers. So, as far as a MongoDB ORM for Ruby, the Mongoid ODM provides API parity wherever possible to ensure developers working with a Rails application and using MongoDB can do so using methods and mechanics they’re already familiar with.
See the Mongoid documentation for more information.
## Mongoose
If you’re seeking an ORM for Node.js and MongoDB, look no further than Mongoose. This Node.js-based Object Data Modeling (ODM) library for MongoDB is akin to an Object Relational Mapper (ORM) such as SQLAlchemy. The problem that Mongoose aims to solve is allowing developers to enforce a specific schema at the application layer. In addition to enforcing a schema, Mongoose also offers a variety of hooks, model validation, and other features aimed at making it easier to work with MongoDB.
See the Mongoose documentation or MongoDB & Mongoose: Compatibility and Comparison for more information.
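To give a flavour of what schema enforcement looks like, here is a minimal, hypothetical Mongoose model. The connection string and field names are placeholders for illustration, not anything from an existing application.

```javascript
const mongoose = require("mongoose");

// Hypothetical schema: Mongoose validates documents against it at the application layer.
const blogSchema = new mongoose.Schema({
  title: { type: String, required: true },
  tags: [String],
  createdAt: { type: Date, default: Date.now },
});

const Blog = mongoose.model("Blog", blogSchema);

async function run() {
  await mongoose.connect("mongodb+srv://user:password@cluster.example.mongodb.net/blog");
  const post = await Blog.create({ title: "Hello MongoDB", tags: ["intro"] });
  console.log(post._id.toString());
}

run().catch(console.error);
```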
## MongoEngine
MongoEngine is a Python ORM for MongoDB. Branded as a Document-Object Mapper, it uses a simple declarative API, similar to the Django ORM.
It was first released in 2015 as an open-source project, and the current version is built on top of PyMongo, the official Python Driver by MongoDB.
See the MongoEngine documentation for more information.
## Prisma
Prisma is a new kind of ORM for Node.js and TypeScript that fundamentally differs from traditional ORMs. With Prisma, you define your models in the declarative Prisma schema, which serves as the single source of truth for your database schema and the models in your programming language. The Prisma Client will read and write data to your database in a type-safe manner, without the overhead of managing complex model instances. This makes the process of querying data a lot more natural as well as more predictable since Prisma Client always returns plain JavaScript objects.
Support for MongoDB was one of the most requested features since the initial release of the Prisma ORM, and was added in version 3.12.
See Prisma & MongoDB for more information.
## Spring Data MongoDB
If you’re seeking a Java ORM for MongoDB, Spring Data for MongoDB is the most popular choice for Java developers. The Spring Data project provides a familiar and consistent Spring-based programming model for new datastores while retaining store-specific features and capabilities.
Key functional areas of Spring Data MongoDB that Java developers will benefit from are a POJO centric model for interacting with a MongoDB DBCollection and easily writing a repository-style data access layer.
See the Spring Data MongoDB documentation or the Spring Boot Integration with MongoDB Tutorial for more information.
## Go Build Something Awesome!
Though not an exhaustive list of the available MongoDB ORM and ODM libraries available right now, the entries above should allow you to get started using MongoDB in your language of choice more naturally and efficiently.
If you’re looking for assistance or have any feedback don’t hesitate to engage on our Community Forums. | md | {
"tags": [
"MongoDB",
"Ruby",
"Python",
"Java"
],
"pageDescription": "MongoDB has a number of ORMs, ODMs, and Libraries that simplify the interaction between your application and your MongoDB cluster. Build faster with the best database for Ruby, Python, Java, Node.js, and PHP using these libraries, ORMs, and ODMs.",
"contentType": "Article"
} | MongoDB ORMs, ODMs, and Libraries | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/source-generated-classes-nullability-realm | created | # Source Generated Classes and Nullability in Realm .NET
The latest releases of Realm .NET have included some interesting updates that we would love to share — in particular, source generated classes and support for nullability annotation.
## Source generated classes
Realm 10.18.0 introduced `Realm.SourceGenerator`, a source generator that can generate Realm model classes. This is part of our ongoing effort to modernize the Realm library, and will allow us to introduce certain language level features more easily in the future.
The migration to the new source generated classes is quite straightforward. All you need to do is:
* Declare the Realm classes as `partial`, including all the eventual containing classes.
* Swap out the base Realm classes (`RealmObject`, `EmbeddedObject`, `AsymmetricObject`) for the equivalent interfaces (`IRealmObject`, `IEmbeddedObject`, `IAsymmetricObject`).
* Declare `OnManaged` and `OnPropertyChanged` methods as `partial` instead of overriding them, if they are used.
The property definition remains the same, and the source generator will take care of adding the full implementation of the interfaces.
To give an example, if your model definition looks like this:
```csharp
public class Person: RealmObject
{
public string Name { get; set; }
public PhoneNumber Phone { get; set; }
protected override void OnManaged()
{
//...
}
protected override void OnPropertyChanged(string propertyName)
{
//...
}
}
public class PhoneNumber: EmbeddedObject
{
public string Number { get; set; }
public string Prefix { get; set; }
}
```
This is how it should look after you've migrated it:
```csharp
public partial class Person: IRealmObject
{
public string Name { get; set; }
public PhoneNumber Phone { get; set; }
partial void OnManaged()
{
//...
}
partial void OnPropertyChanged(string propertyName)
{
//...
}
}
public partial class PhoneNumber: IEmbeddedObject
{
public string Number { get; set; }
public string Prefix { get; set; }
}
```
The classic Realm model definition is still supported, but it will not receive some of the new updates, such as the support for nullability annotations, and will be phased out in the future.
## Nullability annotations
Realm 10.20.0 introduced full support for nullability annotations in the model definition for source generated classes. This allows you to use Realm models as usual when nullable context is active, and removes the need to use the `Required` attribute to indicate required properties, as this information will be inferred directly from the nullability status.
To sum up the expected nullability annotations:
* Value type properties, such as `int`, can be declared as before, either nullable or not.
* `string` and `byte[]` properties can no longer be decorated with the `Required` attribute, as this information will be inferred from the nullability. If the property is not nullable, then it is considered equivalent to declaring it with the `Required` attribute.
* Collections (list, sets, dictionaries, and backlinks) cannot be declared nullable, but their parameters may be.
* Properties that link to a single Realm object are inherently nullable, and thus the type must be defined as nullable.
* Lists, sets, and backlinks of objects cannot contain null values, and thus the type parameter must be non-nullable.
* Dictionaries of object values can contain null, and thus the type parameter must be nullable.
Defining the properties with a different nullability annotation than what has been outlined will raise a diagnostic error. For instance:
```cs
public partial class Person: IRealmObject
{
//string (same for byte[])
public string Name { get; set; } //Correct, required property
public string? Name { get; set; } //Correct, non-required property
//Collections
public IList<int> IntList { get; } //Correct
public IList<int?> IntList { get; } //Correct
public IList<int>? IntList { get; } //Error
//Object
public Dog? MyDog { get; set; } //Correct
public Dog MyDog { get; set; } //Error
//List of objects
public IList<Dog> MyDogs { get; } //Correct
public IList<Dog?> MyDogs { get; } //Error
//Set of objects
public ISet<Dog> MyDogs { get; } //Correct
public ISet<Dog?> MyDogs { get; } //Error
//Dictionary of objects
public IDictionary<string, Dog?> MyDogs { get; } //Correct
public IDictionary<string, Dog> MyDogs { get; } //Error
//Backlink
[Realms.Backlink("...")]
public IQueryable<Dog> MyDogs { get; } //Correct
[Realms.Backlink("...")]
public IQueryable<Dog?> MyDogs { get; } //Error
}
```
We realize that some developers would prefer to have more freedom in the nullability annotation of object properties, and it is possible to do so by setting `realm.ignore_objects_nullability = true` in a global configuration file (more information about this can be found in the .NET documentation). If this option is enabled, all the object properties (including collections) will be considered valid, and the nullability annotations will be ignored.
Finally, please note that this will only work with source generated classes, and not with the classic Realm model definition. If you want more information, you can take a look at the Realm .NET repository and at our documentation.
Want to continue the conversation? Head over to our community forums! | md | {
"tags": [
"Realm",
".NET"
],
"pageDescription": "The latest releases of Realm .NET have included some interesting updates that we would love to share — in particular, source generated classes and support for nullability annotation.",
"contentType": "Article"
} | Source Generated Classes and Nullability in Realm .NET | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/swift/swift-single-collection-pattern | created | # Working with the MongoDB Single-Collection Pattern in Swift
It's a MongoDB axiom that you get the best performance and scalability by storing together the data that's most commonly accessed together.
The simplest and most obvious approach to achieve this is to embed all related data into a single document. This works great in many cases, but there are a few scenarios where it can become inefficient:
* (Very) many to many relationships. This can lead to duplicated data. This duplication is often acceptable — storage is comparatively cheap, after all. It gets more painful when the duplicated data is frequently modified. You then have the cost of updating every document which embeds that data.
* Reading small parts of large documents. Even if your query is only interested in a small fraction of fields in a document, the whole document is brought into cache — taking up memory that could be used more effectively.
* Large, mutable documents. Whenever your application makes a change to a document, the entire document must be written back to disk at some point (could be combined with other changes to the same document). WiredTiger writes data to disk in 4 KB blocks after compression — that typically maps to a 16-20 KB uncompressed document. If you're making lots of small edits to a 20+ KB document, then you may be wasting disk IO.
If embedding all of the data in a single document isn't the right pattern for your application, then consider the single-collection design. The single-collection pattern can deliver comparable read performance to embedded documents, while also optimizing for updates.
There are variants on the single-collection pattern, but for this post, I focus on the key aspects:
* Related data that's queried together is stored in the same collection.
* The documents can have different structures.
* Indexes are added so that all of the data for your frequent queries can be fetched with a single index lookup.
At this point, your developer brain may be raising questions about how your application code can cope with this. It's common to read the data from a particular collection, and then have the MongoDB driver convert that document into an object of a specific class. How does that work if the driver is fetching documents with different shapes from the same collection? This is the primary thing I want to demonstrate in this post.
I'll be using Swift, but the same principles apply to other languages. To see how to do this with Java/Spring Data, take a look at Single-Collection Designs in MongoDB with Spring Data.
## Running the example code
I recently started using the MongoDB Swift Driver for the first time. I decided to build a super-simple Mac desktop app that lets you browse your collections (which MongoDB Compass does a **much** better job of) and displays Change Stream events in real time (which Compass doesn't currently do).
You can download the code from the Swift-Change-Streams repo. Just build and run from Xcode.
Provide your connection-string and then browse your collections. Select the "Enable change streams" option to display change events in real time.
The app will display data from most collections as generic JSON documents, with no knowledge of the schema. There's a special case for a collection named "Collection" in a database named "Single" — we'll look at that next.
### Sample data
The Single.Collection collection needs to contain these (or similar) documents:
```json
{ _id: 'basket1', docType: 'basket', customer: 'cust101' }
{ _id: 'basket1-item1', docType: 'item', name: 'Fish', quantity: 5 }
{ _id: 'basket1-item2', docType: 'item', name: 'Chips', quantity: 3 }
```
This data represents a shopping basket with an `_id` of "basket1". There are two items associated with `basket1` — `basket1-item1` and `basket1-item2`. A single query will fetch all three documents for the basket (find all documents where `_id` starts with "basket1"). There is always an index on the `_id` attribute, and so that index will be used.
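In mongosh, that single query might look like the following sketch. A real application would make the prefix pattern delimiter-aware so that "basket1" does not also match a hypothetical "basket10".

```javascript
// Fetch the basket document and all of its item documents with one index-backed prefix query
db.getSiblingDB("Single").getCollection("Collection").find({ _id: /^basket1/ })
```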
Note that all of the data for a basket in this dataset is extremely small — **well** below the 16-20K threshold — and so in a real life example, I'd actually advise embedding everything in a single document instead. The single-collection pattern would make more sense if there were a large number of line items, and each was large (e.g., if they embedded multiple thumbnail images).
Each document also has a `docType` attribute to identify whether the document refers to the basket itself, or one of the associated items. If your application included a common query to fetch just the basket or just the items associated with the basket, then you could add a composite index: `{ _id: 1, docType: 1}`.
Other uses of the `docType` field include:
* A prompt to help humans understand what they're looking at in the collection.
* Filtering the data returned from a query to just certain types of documents from the collection.
* Filtering which types of documents are included when using MongoDB Compass to examine a collection's schema.
- Allowing an application to identify what type of document it has received. The application code can then get the MongoDB driver to unmarshal the document into an object of the correct class. This is what we'll look at next.
### Handling different document types from the same collection
We'll use the same desktop app to see how your code can discriminate between different types of documents from the same collection.
The app has hardcoded knowledge of what a basket and item documents looks like. This allows it to render the document data in specific formats, rather than as a JSON document:
The code to determine the document `docType` and convert the document to an object of the appropriate class can be found in CollectionView.swift.
CollectionView fetches all of the matching documents from MongoDB and stores them in an array of `BSONDocument`s:
```swift
@State private var docs = [BSONDocument]()
```
The application can then loop over each document in `docs`, checks the `docType` attribute, and then decides what to do based on that value:
```swift
List(docs, id: \.hashValue) { doc in
if path.dbName == "Single" && path.collectionName == "Collection" {
if let docType = doc["docType"] {
switch docType {
case "basket":
if let basket = basket(doc: doc) {
BasketView(basket: basket)
}
case "item":
if let item = item(doc: doc) {
ItemView(item: item)
}
default:
Text("Unknown doc type")
}
}
} else {
JSONView(doc: doc)
}
}
```
If `docType == "basket"`, then the code converts the generic doc into a `Basket` object and passes it to `BasketView` for rendering.
This is the `Basket` class, including initializer to create a `Basket` from a `BSONDocument`:
```swift
struct Basket: Codable {
let _id: String
let docType: String
let customer: String
init(doc: BSONDocument) {
do {
self = try BSONDecoder().decode(Basket.self, from: doc)
} catch {
_id = "n/a"
docType = "basket"
customer = "n/a"
print("Failed to convert BSON to a Basket: \(error.localizedDescription)")
}
}
}
```
Similarly for `Item`s:
```swift
struct Item: Codable {
let _id: String
let docType: String
let name: String
let quantity: Int
init(doc: BSONDocument) {
do {
self = try BSONDecoder().decode(Item.self, from: doc)
} catch {
_id = "n/a"
docType = "item"
name = "n/a"
quantity = 0
print("Failed to convert BSON to a Item: \(error.localizedDescription)")
}
}
}
```
The sub-views can then use the attributes from the properly-typed object to render the data appropriately:
```swift
struct BasketView: View {
let basket: Basket
var body: some View {
VStack {
Text("Basket")
.font(.title)
Text("Order number: \(basket._id)")
Text("Customer: \(basket.customer)")
}
.padding()
.background(.secondary)
.clipShape(RoundedRectangle(cornerRadius: 15.0))
}
}
```
```swift
struct ItemView: View {
let item: Item
var body: some View {
VStack {
Text("Item")
.font(.title)
Text("Item name: \(item.name)")
Text("Quantity: \(item.quantity)")
}
.padding()
.background(.secondary)
.clipShape(RoundedRectangle(cornerRadius: 15.0))
}
}
```
### Conclusion
The single-collection pattern is a way to deliver read and write performance when embedding or other design patterns aren't a good fit.
This pattern breaks the 1-1 mapping between application classes and MongoDB collections that many developers might assume. This post shows how to work around that:
* Extract a single docType field from the BSON document returned by the MongoDB driver.
* Check the value of docType and get the MongoDB driver to map the BSON document into an object of the appropriate class.
Questions? Comments? Head over to our Developer Community to continue the conversation! | md | {
"tags": [
"Swift",
"MongoDB"
],
"pageDescription": "You can improve application performance by storing together data that’s accessed together. This can be done through embedding sub-documents, or by storing related documents in the same collection — even when they have different shapes. This post explains how to work with these polymorphic MongoDB collections from your Swift app.",
"contentType": "Quickstart"
} | Working with the MongoDB Single-Collection Pattern in Swift | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-search-soccer | created | # Atlas Search is a Game Changer!
Every four years, for the sake of blending in, I pretend to know soccer (football, for my non-American friends). I smile. I cheer. I pretend to understand what "offsides" means. But what do I know about soccer, anyway? My soccer knowledge is solely defined by my status as a former soccer mom with an addiction to Ted Lasso.
When the massive soccer tournaments are on, I’m overwhelmed by the exhilarated masses. Painted faces to match their colorful soccer jerseys. Jerseys with unfamiliar names from far away places. I recognize Messi and Ronaldo, but the others? Mkhitaryan, Szczęsny, Großkreutz? How can I look up their stats to feign familiarity when I have no idea how to spell their names?
Well, now there’s an app for that. And it’s built with Atlas Search: www.atlassearchsoccer.com. Check out the video tutorial:
:youtube[]{vid=1uTmDNTdgaw&t}
**Build your own dream team!**
With Atlas Search Soccer, you can scour across 22,000 players to build your own dream team of players across national and club teams. This instance of Atlas Search lets you search on a variety of different parameters and data types. Equipped with only a search box, sliders, and checkboxes, find the world's best players with the most impossible-to-spell names to build out your own dream team. Autocomplete, wildcard, and filters to find Ibrahimović, Błaszczykowski, and Szczęsny? No problem!
When you pick a footballer for your team, he is written to local storage on your device. That way, your team stays warmed up and on the pitch even after you close your browser. You can then compare your dream team with your friends.
**Impress your soccerphile friends!**
Atlas Search Soccer grants you *instant* credibility in sports bars. Who is the best current French player? Who plays goalie for Arsenal? Is Ronaldo from Portugal or Brazil? You can say with confidence because you have the *DATA!* Atlas Search lets you find it fast!
**Learn all the $search Skills and Drills!**
As you interact with the application, you'll see the $search operator in a MongoDB aggregation pipeline live in-action! Click on the Advanced Scouting image for more options using the compound operator. Learn all the ways and plays to build complex, fine-grained, full-text searches across text, date, and numerics.
* search operators
* text
* wildcard
* autocomplete
* range
* moreLikeThis
* fuzzy matching
* indexes and analyzers
* compound operator
* relevance based scoring
* custom score modifiers
* filters, facets and counts
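To make the list above concrete, here is a sketch of the kind of pipeline that sits behind the search box. The collection name (`players`), the field names (`name`, `country`), and the autocomplete-enabled index mapping are assumptions for illustration rather than the app's actual schema.

```javascript
db.players.aggregate([
  {
    $search: {
      index: "default",
      autocomplete: {
        query: "Szcz",          // finds Szczęsny without typing the full spelling
        path: "name",
        fuzzy: { maxEdits: 1 }  // tolerate a one-character slip
      }
    }
  },
  { $limit: 10 },
  { $project: { name: 1, country: 1, score: { $meta: "searchScore" } } }
])
```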
Over the next season, we will launch a series of tutorials, breaking down how to implement all of these features. We can even cover GraphQL and the Data API if we head into extra time. And of course, we will provide tips and tricks for optimal performance.
**Gain a home-field advantage by playing in your own stadium!**
Here is the repo so you can build Atlas Search Soccer on your own free-forever cluster.
So give it a shot. You'll be an Atlas Search pro in no time!
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Atlas Search is truly a game changer to quickly build fine-grained search functionality into your applications. See how with this Atlas Search Soccer demo app.",
"contentType": "Article"
} | Atlas Search is a Game Changer! | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/interact-aws-lambda-function-csharp | created | # Interact with MongoDB Atlas in an AWS Lambda Function Using C#
AWS Lambda is an excellent choice for C# developers looking for a solid serverless solution with many integration options with the rest of the AWS ecosystem. When a database is required, reading and writing to MongoDB Atlas at lightning speed is effortless because Atlas databases can be instantiated in the same data center as your AWS Lambda function.
In this tutorial, we will learn how to create a C# serverless function that efficiently manages the number of MongoDB Atlas connections to make your Lambda function as scalable as possible.
## The prerequisites
* Knowledge of the C# programming language.
* A MongoDB Atlas cluster with sample data, network access (firewall), and user roles already configured.
* An Amazon Web Services (AWS) account with a basic understanding of AWS Lambda.
* Visual Studio with the AWS Toolkit and the Lambda Templates installed (official tutorial).
## Create and configure your Atlas database
This step-by-step MongoDB Atlas tutorial will guide you through creating an Atlas database (free tier available) and loading the sample data.
Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
We will open the network access to any incoming IP to keep this tutorial simple and make it work with the free Atlas cluster tier. Here's how to add an IP to your Atlas project. Adding 0.0.0.0 means that any external IP can access your cluster.
In a production environment, you should restrict access and follow best MongoDB security practices, including using network peering between AWS Lambda and MongoDB Atlas. The free cluster tier does not support peering.
## Build an AWS Lambda function with C#
In Visual Studio, create a basic AWS lambda project using the "AWS Lambda Project (.NET Core - C#)" project template with the "Empty Function" blueprint. We'll use that as the basis of this tutorial. Here's the official AWS tutorial to create such a project, but essentially:
1. Open Visual Studio, and on the File menu, choose New, Project.
2. Create a new "AWS Lambda Project (.NET Core - C#)" project.
3. We'll name the project "AWSLambda1."
Follow the official AWS tutorial above to make sure that you can upload the project to Lambda and that it runs. If it does, we're ready to make changes to connect to MongoDB from AWS Lambda!
In our project, the main class is called `Function`. It will be instantiated every time the Lambda function is triggered. Inside, we have a method called `FunctionHandler`, (`Function:: FunctionHandler`), which we will designate to Lambda as the entry point.
## Connecting to MongoDB Atlas from a C# AWS Lambda function
Connecting to MongoDB requires adding the MongoDB.Driver (by MongoDB Inc) package in your project's packages.
Next, add the following namespaces at the top of your source file:
```
using MongoDB.Bson;
using MongoDB.Driver;
```
In the Function class, we will declare a static MongoClient member. Having it as a `static` member is crucial because we want to share it across multiple instances that AWS Lambda could spawn.
Although we don't have complete control over, or visibility into, the Lambda serverless environment, this is the best practice to keep the number of connections back to the Atlas cluster to a minimum.
If we did not declare MongoClient as `static`, each class instance would create its own set of resources. Instead, the static MongoClient is shared among multiple class instances after a first instance was created (warm start). You can read more technical details about managing MongoDB Atlas connections with AWS Lambda.
We will also add a `CreateMongoClient()` method that initializes the MongoDB client when the class is instantiated. Now, things should look like this:
```
public class Function
{
private static MongoClient? Client;
private static MongoClient CreateMongoClient()
{
var mongoClientSettings = MongoClientSettings.FromConnectionString(Environment.GetEnvironmentVariable("MONGODB_URI"));
return new MongoClient(mongoClientSettings);
}
static Function()
{
Client = CreateMongoClient();
}
...
}
```
To keep your MongoDB credentials safe, your connection string can be stored in an AWS Lambda environment variable. The connection string looks like this below, and here's how to get it in Atlas.
`mongodb+srv://USER:PASSWORD@INSTANCENAME.owdak.mongodb.net/?retryWrites=true&w=majority`
**Note**: Visual Studio might store the connection string with your credentials into an aws-lambda-tools-defaults.json file at some point, so don't include that in a code repository.
If you want to use environment variables in the Mock Lambda Test Tool, you must create a specific "Mock Lambda Test Tool" profile with its own set of environment variables in `aws-lambda-tools-defaults.json` (here's an example).
You can learn more about AWS Lambda environment variables. However, be aware that such variables can be set from within your Visual Studio when publishing to AWS Lambda or directly in the AWS management console on your AWS Lambda function page.
For testing purposes, and if you don't want to bother, some people hard-code the connection string as so:
```
var mongoClientSettings = MongoClientSettings.FromConnectionString("mongodb+srv://USER:PASSWORD@instancename.owdak.mongodb.net/?retryWrites=true&w=majority");
```
Finally, we can modify the FunctionHandler() function to read the first document from the sample\_airbnb.listingsAndReviews database and collection we preloaded in the prerequisites.
The try/catch statements are not mandatory, but they can help detect small issues such as the firewall not being set up, or other configuration errors.
```
public string FunctionHandler(string input, ILambdaContext context)
{
if (Client != null)
{
try
{
var database = Client.GetDatabase("sample_airbnb");
var collection = database.GetCollection<BsonDocument>("listingsAndReviews");
var result = collection.Find(FilterDefinition<BsonDocument>.Empty).First();
return result.ToString();
}
catch
{
return "Handling failed";
}
} else
{
return "DB not initialized";
}
}
```
Using the "listingsAndReviews" collection (a "table" in SQL jargon) in the "sample\_airbnb" database, the code fetches the first document of the collection.
`collection.Find()` normally takes a MongoDB Query built as a BsonDocument, but in this case, we only need an empty query.
## Publish to AWS and test
It's time to upload it to AWS Lambda. In the Solution Explorer, right-click on the project and select "Publish to AWS Lambda." Earlier, you might have done this while setting up the project using the official AWS Lambda C# tutorial.
If this is the first time you're publishing this function, take the time to give it a name (we use "mongdb-csharp-function-001"). It will be utilized during the initial Lambda function creation.
In the screenshot below, the AWS Lambda function Handler ("Handler") information is essential as it tells Lambda which method to call when an event is triggered. The general format is Assembly::Namespace.ClassName::MethodName
In our case, the handler is `AWSLambda1::AWSLambda1.Function::FunctionHandler`.
If the option is checked, this dialog will save these options in the `aws-lambda-tools-defaults.json` file.
Click "Next" to see the second upload screen. The most important aspect of it is the environment variables, such as the connection string.
When ready, click on "Upload." Visual Studio will create/update your Lambda function to AWS and launch a test window where you can set your sample input and execute the method to see its response.
Our Lambda function expects an input string, so we'll use the "hello" string in our Sample Input, then click the "Invoke" button. The execution's response will be sent to the "Response" field to the right. As expected, the first database record is converted into a string, as shown below.
## Conclusion
We just learned how to build a C# AWS Lambda serverless function efficiently by creating and sharing a MongoDB client and connecting multiple class instances. If you're considering building with a serverless architecture and AWS Lambda, MongoDB Atlas is an excellent option.
The flexibility of our document model makes it easy to get started quickly and evolve your data structure over time. Create a free Atlas cluster now to try it.
If you want to learn more about our MongoDB C# driver, refer to the continuously updated documentation. You can do much more with MongoDB Atlas, and our C# Quick Start is a great first step on your MongoDB journey. | md | {
"tags": [
"C#",
"MongoDB",
".NET",
"AWS"
],
"pageDescription": "In this tutorial, we'll see how to create a serverless function using the C# programming language and that function will connect to and query MongoDB Atlas in an efficient manner.",
"contentType": "Tutorial"
} | Interact with MongoDB Atlas in an AWS Lambda Function Using C# | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-separating-data | created | # Separating Data That is Accessed Together
We're breezing through the MongoDB schema design anti-patterns. So far in this series, we've discussed four of the six anti-patterns:
- Massive arrays
- Massive number of collections
- Unnecessary indexes
- Bloated documents
Normalizing data and splitting it into different pieces to optimize for space and reduce data duplication can feel like second nature to those with a relational database background. However, separating data that is frequently accessed together is actually an anti-pattern in MongoDB. In this post, we'll find out why and discuss what you should do instead.
>:youtube[]{vid=dAN76_47WtA t=15}
>
>If you prefer to learn by video (or you just like hearing me repeat, "Data that is accessed together should be stored together"), watch the video above.
## Separating Data That is Accessed Together
Much like you would use a `join` to combine information from different tables in a relational database, MongoDB has a $lookup operation that allows you to join information from more than one collection. `$lookup` is great for infrequent, rarely used operations or analytical queries that can run overnight without a time limit. However, `$lookup` is not so great when you're frequently using it in your applications. Why?
`$lookup` operations are slow and resource-intensive compared to operations that don't need to combine data from more than one collection.
The rule of thumb when modeling your data in MongoDB is:
>Data that is accessed together should be stored together.
Instead of separating data that is frequently used together between multiple collections, leverage embedding and arrays to keep the data together in a single collection.
For example, when modeling a one-to-one relationship, you can embed a document from one collection as a subdocument in a document from another. When modeling a one-to-many relationship, you can embed information from multiple documents in one collection as an array of documents in another.
Keep in mind the other anti-patterns we've already discussed as you begin combining data from different collections together. Massive, unbounded arrays and bloated documents can both be problematic.
If combining data from separate collections into a single collection will result in massive, unbounded arrays or bloated documents, you may want to keep the collections separate and duplicate some of the data that is used frequently together in both collections. You could use the Subset Pattern to duplicate a subset of the documents from one collection in another. You could also use the Extended Reference Pattern to duplicate a portion of the data in each document from one collection in another. In both patterns, you have the option of creating references between the documents in both collections. Keep in mind that whenever you need to combine information from both collections, you'll likely need to use `$lookup`. Also, whenever you duplicate data, you are responsible for ensuring the duplicated data stays in sync.
As we have said throughout this series, each use case is different. As you model your schema, carefully consider how you will be querying the data and what the data you will be storing will realistically look like.
## Example
What would an Anti-Pattern post be without an example from Parks and Recreation? I don't even want to think about it. So let's return to Leslie.
Leslie decides to organize a Model United Nations for local high school students and recruits some of her coworkers to participate as well. Each participant will act as a delegate for a country during the event. She assigns Andy and Donna to be delegates for Finland.
Leslie decides to store information related to the Model United Nations in a MongoDB database. She wants to store the following information in her database:
- Basic stats about each country
- A list of resources that each country has available to trade
- A list of delegates for each country
- Policy statements for each country
- Information about each Model United Nations event she runs
With this information, she wants to be able to quickly generate the following reports:
- A country report that contains basic stats, resources currently available to trade, a list of delegates, the names and dates of the last five policy documents, and a list of all of the Model United Nations events in which this country has participated
- An event report that contains information about the event and the names of the countries who participated
The Model United Nations event begins, and Andy is excited to participate. He decides he doesn't want any of his country's "boring" resources, so he begins trading with other countries in order to acquire all of the world's lions.
Leslie decides to create collections for each of the categories of information she needs to store in her database. After Andy is done trading, Leslie has documents like the following.
``` javascript
// Countries collection
{
"_id": "finland",
"official_name": "Republic of Finland",
"capital": "Helsinki",
"languages":
"Finnish",
"Swedish",
"Sámi"
],
"population": 5528737
}
```
``` javascript
// Resources collection
{
"_id": ObjectId("5ef0feeb0d9314ac117d2034"),
"country_id": "finland",
"lions": 32563,
"military_personnel": 0,
"pulp": 0,
"paper": 0
}
```
``` javascript
// Delegates collection
{
"_id": ObjectId("5ef0ff480d9314ac117d2035"),
"country_id": "finland",
"first_name": "Andy",
"last_name": "Fryer"
},
{
"_id": ObjectId("5ef0ff710d9314ac117d2036"),
"country_id": "finland",
"first_name": "Donna",
"last_name": "Beagle"
}
```
``` javascript
// Policies collection
{
"_id": ObjectId("5ef34ec43e5f7febbd3ed7fb"),
"date-created": ISODate("2011-11-09T04:00:00.000+00:00"),
"status": "draft",
"title": "Country Defense Policy",
"country_id": "finland",
"policy": "Finland has formally decided to use lions in lieu of military for all self defense..."
}
```
``` javascript
// Events collection
{
"_id": ObjectId("5ef34faa3e5f7febbd3ed7fc"),
"event-date": ISODate("2011-11-10T05:00:00.000+00:00"),
"location": "Pawnee High School",
"countries": [
"Finland",
"Denmark",
"Peru",
"The Moon"
],
"topic": "Global Food Crisis",
"award-recipients": [
"Allison Clifford",
"Bob Jones"
]
}
```
When Leslie wants to generate a report about Finland, she has to use `$lookup` to combine information from all five collections. She wants to optimize her database performance, so she decides to leverage embedding to combine information from her five collections into a single collection.
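Before any redesign, the aggregation behind her country report looks something like the sketch below. The collection names are illustrative, and the join to her events data is more awkward still because events reference countries by name rather than by `_id`.

``` javascript
db.countries.aggregate([
  { $match: { _id: "finland" } },
  // One $lookup per related collection, joining on the country_id field shown above
  { $lookup: { from: "resources", localField: "_id", foreignField: "country_id", as: "resources" } },
  { $lookup: { from: "delegates", localField: "_id", foreignField: "country_id", as: "delegates" } },
  { $lookup: { from: "policies",  localField: "_id", foreignField: "country_id", as: "policies" } }
])
```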
Leslie begins working on improving her schema incrementally. As she looks at her schema, she realizes that she has a one-to-one relationship between documents in her `Countries` collection and her `Resources` collection. She decides to embed the information from the `Resources` collection as sub-documents in the documents in her `Countries` collection.
Now the document for Finland looks like the following.
``` javascript
// Countries collection
{
"_id": "finland",
"official_name": "Republic of Finland",
"capital": "Helsinki",
"languages": [
"Finnish",
"Swedish",
"Sámi"
],
"population": 5528737,
"resources": {
"lions": 32563,
"military_personnel": 0,
"pulp": 0,
"paper": 0
}
}
```
As you can see above, she has kept the information about resources together as a sub-document in her document for Finland. This is an easy way to keep data organized.
She has no need for her `Resources` collection anymore, so she deletes it.
At this point, she can retrieve information about a country and its resources without having to use `$lookup`.
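In mongosh, that part of the report now collapses to a single find against the same illustrative collection name used above:

``` javascript
db.countries.find({ _id: "finland" })
```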
Leslie continues analyzing her schema. She realizes she has a one-to-many relationship between countries and delegates, so she decides to create an array named `delegates` in her `Countries` documents. Each `delegates` array will store objects with delegate information. Now her document for Finland looks like the following:
``` javascript
// Countries collection
{
"_id": "finland",
"official_name": "Republic of Finland",
"capital": "Helsinki",
"languages": [
"Finnish",
"Swedish",
"Sámi"
],
"population": 5528737,
"resources": {
"lions": 32563,
"military_personnel": 0,
"pulp": 0,
"paper": 0
},
"delegates": [
{
"first_name": "Andy",
"last_name": "Fryer"
},
{
"first_name": "Donna",
"last_name": "Beagle"
}
]
}
```
Leslie feels confident about storing the delegate information in her country documents since each country will have only a handful of delegates (meaning her array won't grow infinitely), and she won't be frequently accessing information about the delegates separately from their associated countries.
Leslie no longer needs her `Delegates` collection, so she deletes it.
Leslie continues optimizing her schema and begins looking at her `Policies` collection. She has a one-to-many relationship between countries and policies. She needs to include the titles and dates of each country's five most recent policy documents in her report. She considers embedding the policy documents in her country documents, but the documents could quickly become quite large based on the length of the policies. She doesn't want to fall into the trap of the Bloated Documents Anti-Pattern, but she also wants to avoid using `$lookup` every time she runs a report.
Leslie decides to leverage the Subset Pattern. She stores the titles and dates of the five most recent policy documents in her country document. She also creates a reference to the policy document, so she can easily gather all of the information for each policy when needed. She leaves her `Policies` collection as-is. She knows she'll have to maintain some duplicate information between the documents in the `Countries` collection and the `Policies` collection, but she decides duplicating a little bit of information is a good tradeoff to ensure fast queries.
Her document for Finland now looks like the following:
``` javascript
// Countries collection
{
"_id": "finland",
"official_name": "Republic of Finland",
"capital": "Helsinki",
"languages":
"Finnish",
"Swedish",
"Sámi"
],
"population": 5528737,
"resources": {
"lions": 32563,
"military_personnel": 0,
"pulp": 0,
"paper": 0
},
"delegates": [
{
"first_name": "Andy",
"last_name": "Fryer"
},
{
"first_name": "Donna",
"last_name": "Beagle"
}
],
"recent-policies": [
{
"_id": ObjectId("5ef34ec43e5f7febbd3ed7fb"),
"date-created": ISODate("2011-11-09T04:00:00.000+00:00"),
"title": "Country Defense Policy"
},
{
"_id": ObjectId("5ef357bb3e5f7febbd3ed7fd"),
"date-created": ISODate("2011-11-10T04:00:00.000+00:00"),
"title": "Humanitarian Food Policy"
}
]
}
```
Leslie continues examining her query for her report on each country. The last `$lookup` she has combines information from the `Countries` collection and the `Events` collection. She has a many-to-many relationship between countries and events. She needs to be able to quickly generate reports on each event as a whole, so she wants to keep the `Events` collection separate. She decides to use the Extended Reference Pattern to solve her dilemma. She includes the information she needs about each event in her country documents and maintains a reference to the complete event document, so she can get more information when she needs to. She will duplicate the event date and event topic in both the `Countries` and `Events` collections, but she is comfortable with this as that data is very unlikely to change.
After all of her updates, her document for Finland now looks like the following:
``` javascript
// Countries collection
{
"_id": "finland",
"official_name": "Republic of Finland",
"capital": "Helsinki",
"languages":
"Finnish",
"Swedish",
"Sámi"
],
"population": 5528737,
"resources": {
"lions": 32563,
"military_personnel": 0,
"pulp": 0,
"paper": 0
},
"delegates": [
{
"first_name": "Andy",
"last_name": "Fryer"
},
{
"first_name": "Donna",
"last_name": "Beagle"
}
],
"recent-policies": [
{
"policy-id": ObjectId("5ef34ec43e5f7febbd3ed7fb"),
"date-created": ISODate("2011-11-09T04:00:00.000+00:00"),
"title": "Country Defense Policy"
},
{
"policy-id": ObjectId("5ef357bb3e5f7febbd3ed7fd"),
"date-created": ISODate("2011-11-10T04:00:00.000+00:00"),
"title": "Humanitarian Food Policy"
}
],
"events": [
{
"event-id": ObjectId("5ef34faa3e5f7febbd3ed7fc"),
"event-date": ISODate("2011-11-10T05:00:00.000+00:00"),
"topic": "Global Food Crisis"
},
{
"event-id": ObjectId("5ef35ac93e5f7febbd3ed7fe"),
"event-date": ISODate("2012-02-18T05:00:00.000+00:00"),
"topic": "Pandemic"
}
]
}
```
## Summary
Data that is accessed together should be stored together. If you'll be frequently reading or updating information together, consider storing the information together using nested documents or arrays. Carefully consider your use case and weigh the benefits and drawbacks of data duplication as you bring data together.
Be on the lookout for a post on the final MongoDB schema design anti-pattern!
>When you're ready to build a schema in MongoDB, check out MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB and has a generous, forever-free tier.
## Related Links
Check out the following resources for more information:
- MongoDB Docs: Reduce $lookup Operations
- MongoDB Docs: Data Model Design
- MongoDB Docs: Model One-to-One Relationships with Embedded Documents
- MongoDB Docs: Model One-to-Many Relationships with Embedded Documents
- MongoDB University M320: Data Modeling
- Blog Post: The Subset Pattern
- Blog Post: The Extended Reference Pattern
- Blog Series: Building with Patterns | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Separating Data That is Accessed Together",
"contentType": "Article"
} | Separating Data That is Accessed Together | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-data-api-aws-lambda | created | # Creating an API with the AWS API Lambda and the Atlas Data API
## Introduction
This article will walk through creating an API using the Amazon API Gateway in front of the MongoDB Atlas Data API. When integrating with the Amazon API Gateway, it is possible but undesirable to use a driver, as drivers are designed to be long-lived and maintain connection pooling. Using serverless functions with a driver can result in either a performance hit – if the driver is instantiated on each call and must authenticate – or excessive connection numbers if the underlying mechanism persists between calls, as you have no control over when code containers are reused or created.
The MongoDB Atlas Data API is an HTTPS-based API that allows us to read and write data in Atlas where a MongoDB driver library is either not available or not desirable. For example, when creating serverless microservices with MongoDB.
AWS (Amazon Web Services) describe their API Gateway as:
> "A fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.
> API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales."
## Prerequisites.
A core requirement for this walkthrough is an Amazon Web Services account. The API Gateway is available as part of the AWS free tier, allowing up to 1 million API calls per month, at no charge, in your first 12 months with AWS.
We will also need an Atlas Cluster for which we have enabled the Data API – and our endpoint URL and API Key. You can learn how to get these in this Article or this Video if you do not have them already.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
A common use of Atlas with the Amazon API Gateway might be to provide a managed API to a restricted subset of data in our cluster, which is a common need for a microservice architecture. To demonstrate this, we first need to have some data available in MongoDB Atlas. This can be added by selecting the three dots next to our cluster name and choosing "Load Sample Dataset", or following instructions here.
## Creating an API with the Amazon API Gateway and the Atlas Data API
The instructions here are an extended variation of Amazon's own "Getting Started with the API Gateway" tutorial. I do not presume to teach you how best to use Amazon's API Gateway, as Amazon itself has many fine resources for this; what we will do here is use it to get a basic public API enabled that uses the Data API.
> The Data API itself is currently in an early preview with a flat security model allowing all users who have an API key to query or update any database or collection. Future versions will have more granular security. We would not want to simply expose the current Data API as a 'public' API, but we can use it on the back end to create more restricted and specific access to our data.
>
We are going to create an API that allows users to GET the ten films that received the most awards in any given year - a notional "Best Films of the Year." We will restrict this API to performing only that operation, with the year supplied as part of the URL.
We will first create the API, then analyze the code we used for it.
## Create a AWS Lambda Function to retrieve data with the Data API
1. Sign in to the Lambda console at https://console.aws.amazon.com/lambda.
2. Choose **Create function**.
3. For **Function name**, enter top-movies-for-year.
4. Choose **Create function**.
When you see the JavaScript editor that looks like this:
Replace the code with the following, changing the API-KEY and APP-ID to the values for your Atlas cluster. Save and click **Deploy** (In a production application you might look to store these in AWS Secrets manager , I have simplified by putting them in the code here).
```
const https = require('https');
const atlasEndpoint = "/app/APP-ID/endpoint/data/beta/action/find";
const atlasAPIKey = "API-KEY";
exports.handler = async(event) => {
if (!event.queryStringParameters || !event.queryStringParameters.year) {
return { statusCode: 400, body: 'Year not specified' };
}
//Year is a number but the argument is a string so we need to convert as MongoDB is typed
let year = parseInt(event.queryStringParameters.year, 10);
console.log(`Year = ${year}`)
if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }
const payload = JSON.stringify({
dataSource: "Cluster0",
database: "sample_mflix",
collection: "movies",
filter: { year },
projection: { _id: 0, title: 1, awards: "$awards.wins" },
sort: { "awards.wins": -1 },
limit: 10
});
const options = {
hostname: 'data.mongodb-api.com',
port: 443,
path: atlasEndpoint,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': payload.length,
'api-key': atlasAPIKey
}
};
let results = '';
const response = await new Promise((resolve, reject) => {
const req = https.request(options, res => {
res.on('data', d => {
results += d;
});
res.on('end', () => {
console.log(`end() status code = ${res.statusCode}`);
if (res.statusCode == 200) {
let resultsObj = JSON.parse(results)
resolve({ statusCode: 200, body: JSON.stringify(resultsObj.documents, null, 4) });
}
else {
reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Backend Problem like 404 or wrong API key
}
});
});
//Do not give the user clues about backend issues for security reasons
req.on('error', error => {
reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Issue like host unavailable
});
req.write(payload);
req.end();
});
return response;
};
```
Alternatively, if you are familiar with working with packages and Lambda, you could upload an HTTP package like Axios to Lambda as a zipfile, allowing you to use the following simplified code.
```
const axios = require('axios');
const atlasEndpoint = "https://data.mongodb-api.com/app/APP-ID/endpoint/data/beta/action/find";
const atlasAPIKey = "API-KEY";
exports.handler = async(event) => {
if (!event.queryStringParameters || !event.queryStringParameters.year) {
return { statusCode: 400, body: 'Year not specified' };
}
//Year is a number but the argument is a string so we need to convert as MongoDB is typed
let year = parseInt(event.queryStringParameters.year, 10);
console.log(`Year = ${year}`)
if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }
const payload = {
dataSource: "Cluster0",
database: "sample_mflix",
collection: "movies",
filter: { year },
projection: { _id: 0, title: 1, awards: "$awards.wins" },
sort: { "awards.wins": -1 },
limit: 10
};
try {
const response = await axios.post(atlasEndpoint, payload, { headers: { 'api-key': atlasAPIKey } });
return response.data.documents;
}
catch (e) {
return { statusCode: 500, body: 'Unable to service request' }
}
};
```
## Create an HTTP endpoint for our custom API function
We now need to route an HTTP endpoint to our Lambda function using the HTTP API.
The HTTP API provides an HTTP endpoint for your Lambda function. API Gateway routes requests to your Lambda function, and then returns the function's response to clients.
1. Go to the API Gateway console at https://console.aws.amazon.com/apigateway.
2. Do one of the following:
To create your first API, for HTTP API, choose **Build**.
If you've created an API before, choose **Create API**, and then choose **Build** for HTTP API.
3. For Integrations, choose **Add integration**.
4. Choose **Lambda**.
5. For **Lambda function**, enter top-movies-for-year.
6. For **API name**, enter movie-api.
7. Choose **Next**.
8. Review the route that API Gateway creates for you, and then choose **Next**.
9. Review the stage that API Gateway creates for you, and then choose **Next**.
10. Choose **Create**.
Now you've created an HTTP API with a Lambda integration and the Atlas Data API that's ready to receive requests from clients.
## Test your API
You should now be looking at API Gateway details that look like this. If not, you can get there by going to https://console.aws.amazon.com/apigateway and clicking on **movie-api**.
Take a note of the **Invoke URL**; this is the base URL for your API.
Now, in a new browser tab, browse to `/top-movies-for-year?year=2001`, prefixed with the Invoke URL shown in AWS. You should see the results of your API call - JSON listing the top 10 "Best" films of 2001.
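If you prefer to check from code rather than a browser, a short Node.js script can also exercise the endpoint. This is only a quick sanity check: the Invoke URL below is a placeholder you must replace with your own, and it assumes Node 18+ for the built-in fetch.
```
// check-api.mjs - run with: node check-api.mjs (Node 18+ for built-in fetch)
// The Invoke URL below is a placeholder - replace it with your movie-api Invoke URL
const invokeUrl = "https://abcde12345.execute-api.us-east-1.amazonaws.com";

const response = await fetch(`${invokeUrl}/top-movies-for-year?year=2001`);
console.log(response.status); // expect 200
console.log(await response.json()); // the ten most-awarded films of 2001
```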
## Reviewing our Function
We start by importing the standard Node.js https library - the Data API needs no special libraries to call it. We also define our API key and the path to our find endpoint. You get both of these from the Data API tab in Atlas.
```
const https = require('https');
const atlasEndpoint = "/app/data-amzuu/endpoint/data/beta/action/find";
const atlasAPIKey = "YOUR-API-KEY";
```
Now we check that the API call included a parameter for year and that it's a number. We need to convert it to a number because MongoDB is typed: "2001" and 2001 are different values, and searching for one will not find the other. The collection uses a number for the movie release year.
```
exports.handler = async (event) => {
if (!event.queryStringParameters || !event.queryStringParameters.year) {
return { statusCode: 400, body: 'Year not specified' };
}
//Year is a number but the argument is a string so we need to convert as MongoDB is typed
let year = parseInt(event.queryStringParameters.year, 10);
console.log(`Year = ${year}`)
if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }
const payload = JSON.stringify({
dataSource: "Cluster0", database: "sample_mflix", collection: "movies",
filter: { year }, projection: { _id: 0, title: 1, awards: "$awards.wins" }, sort: { "awards.wins": -1 }, limit: 10
});
```
Then we construct our payload - the parameters for the Atlas API call. We are querying for year = year, projecting just the title and the number of awards, sorting by the number of awards descending, and limiting to 10 results.
```
const payload = JSON.stringify({
dataSource: "Cluster0", database: "sample_mflix", collection: "movies",
filter: { year }, projection: { _id: 0, title: 1, awards: "$awards.wins" },
sort: { "awards.wins": -1 }, limit: 10
});
```
We then construct the options for the HTTPS POST request to the Data API - here we pass the Data API key as a header.
```
const options = {
hostname: 'data.mongodb-api.com',
port: 443,
path: atlasEndpoint,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': payload.length,
'api-key': atlasAPIKey
}
};
```
Finally, we use some fairly standard code to call the API and handle errors. We can get request errors - such as being unable to contact the server - or response errors, where we get any response code other than 200 OK. In both cases, we return a 500 Internal error from our simplified API so as not to leak any details of the internals to a potential hacker.
```
let results = '';
const response = await new Promise((resolve, reject) => {
const req = https.request(options, res => {
res.on('data', d => {
results += d;
});
res.on('end', () => {
console.log(`end() status code = ${res.statusCode}`);
if (res.statusCode == 200) {
let resultsObj = JSON.parse(results)
resolve({ statusCode: 200, body: JSON.stringify(resultsObj.documents, null, 4) });
} else {
reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Backend Problem like 404 or wrong API key
}
});
});
//Do not give the user clues about backend issues for security reasons
req.on('error', error => {
reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Issue like host unavailable
});
req.write(payload);
req.end();
});
return response;
};
```
Our Axios version has just the same functionality as above but is simplified by the use of a library.
## Conclusion
As we can see, calling the Atlas Data API from an AWS Lambda function is incredibly simple, especially if making use of a library like Axios. The Data API is also stateless, so there are no concerns about connection setup times or maintaining long-lived connections as there would be using a driver. | md | {
"tags": [
"Atlas",
"JavaScript",
"AWS"
],
"pageDescription": "In this article we look at how the Atlas Data API is a great choice for accessing MongoDB Atlas from AWS Lambda Functions by creating a custom API with the AWS API Gateway. ",
"contentType": "Tutorial"
} | Creating an API with the AWS API Lambda and the Atlas Data API | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/end-to-end-test-realm-serverless-apps | created | # How to Write End-to-End Tests for MongoDB Realm Serverless Apps
As of June 2022, the functionality previously known as MongoDB Realm is now named Atlas App Services. Atlas App Services refers to the cloud services that simplify building applications with Atlas – Atlas Data API, Atlas GraphQL API, Atlas Triggers, and Atlas Device Sync. Realm will continue to be used to refer to the client-side database and SDKs. Some of the naming or references in this article may be outdated.
End-to-end tests are the cherry on top of a delicious ice cream sundae
of automated tests. Just like many people find cherries to be disgusting
(rightly so—cherries are gross!), many developers are not thrilled to
write end-to-end tests. These tests can be time consuming to write and
difficult to maintain. However, these tests can provide development
teams with confidence that the entire application is functioning as
expected.
Automated tests are like a delicious ice cream sundae.
Today I'll discuss how to write end-to-end tests for apps built using
MongoDB Realm.
This is the third post in the *DevOps + MongoDB Realm Serverless
Functions = 😍* blog series. I began the series by introducing the
Social Stats app, a serverless app I built using MongoDB Realm. I've
explained
how I wrote unit tests
and integration tests
for the app. If you haven't read
the first post where I explained what the app does and how I architected it,
I recommend you start there and then return to this post.
>
>
>Prefer to learn by video? Many of the concepts I cover in this series
>are available in this video.
>
>
## Writing End-to-End Tests for MongoDB Realm Serverless Apps
Today I'll focus on the top layer of the testing
pyramid:
end-to-end tests. End-to-end tests work through a complete scenario a
user would take while using the app. These tests typically interact with
the user interface (UI), clicking buttons and inputting text just as a
user would. End-to-end tests ultimately check that the various
components and systems that make up the app are configured and working
together correctly.
Because end-to-end tests interact with the UI, they tend to be very
brittle; they break easily as the UI changes. These tests can also be
challenging to write. As a result, developers typically write very few
of these tests.
Despite their brittle nature, having end-to-end tests is still
important. These tests give development teams confidence that the app is
functioning as expected.
### Sidenote
I want to pause and acknowledge something before the Internet trolls
start sending me snarky DMs.
This section is titled *writing end-to-end tests for MongoDB Realm
serverless apps*. To be clear, none of the approaches I'm sharing in
this post about writing end-to-end tests are specific to MongoDB Realm
serverless apps. When you write end-to-end tests that interact with the
UI, the underlying architecture is irrelevant. I know this. Please keep
your angry Tweets to yourself.
I decided to write this post, because writing about only two-thirds of
the testing pyramid just seemed wrong. Now let's continue.
### Example End-to-End Test
Let's walk through how I wrote an end-to-test for the Social Stats app.
I began with the simplest flow:
1. A user navigates to the page where they can upload their Twitter
statistics.
2. The user uploads a Twitter statistics spreadsheet that has stats for
a single Tweet.
3. The user navigates to the dashboard so they can see their
statistics.
I decided to build my end-to-end tests using Jest
and Selenium. Using Jest was a
straightforward decision as I had already built my unit and integration
tests using it. Selenium has been a popular choice for automating
browser interactions for many years. I've used it successfully in the
past, so using it again was an easy choice.
I created a new file named `uploadTweetStats.test.js`. Then I started
writing the typical top-of-the-file code.
I began by importing several constants. I imported the MongoClient so
that I would be able to interact directly with my database, I imported
several constants I would need in order to use Selenium, and I imported
the names of the database and collection I would be testing later.
``` javascript
const { MongoClient } = require('mongodb');
const { Builder, By, until, Capabilities } = require('selenium-webdriver');
const { TwitterStatsDb, statsCollection } = require('../constants.js');
```
Then I declared some variables.
``` javascript
let collection;
let mongoClient;
let driver;
```
Next, I created constants for common XPaths I would need to reference
throughout my tests.
XPath
is a query language you can use to select nodes in HTML documents.
Selenium provides a variety of
ways—including
XPaths—for you to select elements in your web app. The constants below
are the XPaths for the nodes with the text "Total Engagements" and
"Total Impressions."
``` javascript
const totalEngagementsXpath = "//*[text()='Total Engagements']";
const totalImpressionsXpath = "//*[text()='Total Impressions']";
```
Now that I had all of my top-of-the-file code written, I was ready to
start setting up my testing structure. I began by implementing the
[beforeAll()
function, which Jest runs once before any of the tests in the file are
run.
Browser-based tests can run a bit slower than other automated tests, so
I increased the timeout for each test to 30 seconds.
Then,
just as I did with the integration tests, I
connected directly to the test database.
``` javascript
beforeAll(async () => {
jest.setTimeout(30000);
// Connect directly to the database
const uri = `mongodb+srv://${process.env.DB_USERNAME}:${process.env.DB_PASSWORD}@${process.env.CLUSTER_URI}/test?retryWrites=true&w=majority`;
mongoClient = new MongoClient(uri);
await mongoClient.connect();
collection = mongoClient.db(TwitterStatsDb).collection(statsCollection);
});
```
Next, I implemented the
beforeEach()
function, which Jest runs before each test in the file.
I wanted to ensure that the collection the tests will be interacting
with is empty before each test, so I added a call to delete everything
in the collection.
Next, I configured the browser the tests will use. I chose to use
headless Chrome, meaning that a browser UI will not actually be
displayed. Headless browsers provide many
benefits
including increased performance. Selenium supports a variety of
browsers,
so you can choose to use whatever browser combinations you'd like.
I used the configurations for Chrome when I created a new
WebDriver
stored in `driver`. The `driver` is what will control the browser
session.
``` javascript
beforeEach(async () => {
// Clear the database
const result = await collection.deleteMany({});
// Create a new driver using headless Chrome
let chromeCapabilities = Capabilities.chrome();
var chromeOptions = {
'args': ['--headless', 'window-size=1920,1080']
};
chromeCapabilities.set('chromeOptions', chromeOptions);
driver = new Builder()
.forBrowser('chrome')
.usingServer('http://localhost:4444/wd/hub')
.withCapabilities(chromeCapabilities)
.build();
});
```
I wanted to ensure the browser session was closed after each test, so I
added a call to do so in
afterEach().
``` javascript
afterEach(async () => {
driver.close();
})
```
Lastly, I wanted to ensure that the database connection was closed after
all of the tests finished running, so I added a call to do so in
afterAll().
``` javascript
afterAll(async () => {
await mongoClient.close();
})
```
Now that I had all of my test structure code written, I was ready to
begin writing the code to interact with elements in my browser. I
quickly discovered that I would need to repeat a few actions in multiple
tests, so I wrote functions specifically for those.
- refreshChartsDashboard():
This function clicks the appropriate buttons to manually refresh the
data in the dashboard.
- moveToCanvasOfElement(elementXpath):
This function moves the mouse to the chart canvas associated with
the node identified by `elementXpath`. This function will come in
handy for verifying elements in charts.
- verifyChartText(elementXpath,
chartText):
This function verifies that when you move the mouse to the chart
canvas associated with the node identified by `elementXpath`, the
`chartText` is displayed in the tooltip.
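The full implementations of these helpers live in the repo linked above.
As a rough illustration of the idea only, a simplified sketch of the last
two helpers might look something like the following. The XPaths used to
locate the chart canvas and tooltip here are assumptions made for the
sketch, not the app's real markup, and the real helpers differ in detail.
``` javascript
// Simplified sketch only - see the Social Stats repo for the real implementations
async function moveToCanvasOfElement(elementXpath) {
  // Find the labeled node, then a chart canvas near it (XPath is illustrative only)
  const label = await driver.findElement(By.xpath(elementXpath));
  const canvas = await label.findElement(By.xpath("./ancestor::div[1]//canvas"));
  // Hover over the canvas so the chart displays its tooltip
  await driver.actions().move({ origin: canvas }).perform();
}

async function verifyChartText(elementXpath, chartText) {
  await moveToCanvasOfElement(elementXpath);
  // Wait for a tooltip containing the expected text to appear (selector is illustrative)
  await driver.wait(
    until.elementLocated(By.xpath(`//*[contains(text(), '${chartText}')]`)),
    10000
  );
}
```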
Finally, I was ready to write my first test case that tests uploading a
CSV file with Twitter statistics for a single Tweet.
``` javascript
test('Single tweet', async () => {
await driver.get(`${process.env.URL}`);
const button = await driver.findElement(By.id('csvUpload'));
await button.sendKeys(process.cwd() + "/tests/ui/files/singletweet.csv");
const results = await driver.findElement(By.id('results'));
await driver.wait(until.elementTextIs(results, `Fabulous! 1 new Tweet(s) was/were saved.`), 10000);
const dashboardLink = await driver.findElement(By.id('dashboard-link'));
dashboardLink.click();
await refreshChartsDashboard();
await verifyChartText(totalEngagementsXpath, "4");
await verifyChartText(totalImpressionsXpath, "260");
})
```
Let's walk through what this test is doing.
Screen recording of the Single tweet test when run in Chrome
The test begins by navigating to the URL for the application I'm using
for testing.
Then the test clicks the button that allows users to browse for a file
to upload. The test selects a file and chooses to upload it.
The test asserts that the page displays a message indicating that the
upload was successful.
Then the test clicks the link to open the dashboard. In case the charts
in the dashboard have stale data, the test clicks the buttons to
manually force the data to be refreshed.
Finally, the test verifies that the correct number of engagements and
impressions are displayed in the charts.
After I finished this test, I wrote another end-to-end test. This test
verifies that uploading CSV files that update the statistics on existing
Tweets as well as uploading CSV files for multiple authors all work as
expected.
You can find the full test file with both end-to-end tests in
storeCsvInDB.test.js.
## Wrapping Up
You now know the basics of how to write automated tests for Realm
serverless apps.
The Social Stats application source code and associated test files are
available in a GitHub repo. The repo's readme
has detailed instructions on how to execute the test files.
While writing and maintaining end-to-end tests can sometimes be painful,
they are an important piece of the testing pyramid. Combined with the
other automated tests, end-to-end tests give the development team
confidence that the app is ready to be deployed.
Now that you have a strong foundation of automated tests, you're ready
to dive into automated deployments. Be on the lookout for the next post
in this series where I'll explain how to craft a CI/CD pipeline for
Realm serverless apps.
## Related Links
Check out the following resources for more information:
- GitHub Repository: Social
Stats
- Video: DevOps + MongoDB Realm Serverless Functions =
😍
- Documentation: MongoDB Realm
- MongoDB Atlas
- MongoDB Charts
| md | {
"tags": [
"Realm",
"Serverless"
],
"pageDescription": "Learn how to write end-to-end tests for MongoDB Realm Serverless Apps.",
"contentType": "Tutorial"
} | How to Write End-to-End Tests for MongoDB Realm Serverless Apps | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/performance-tuning-tips | created | # MongoDB Performance Tuning Questions
Most of the challenges related to keeping a MongoDB cluster running at
top speed can be addressed by asking a small number of fundamental
questions and then using a few crucial metrics to answer them.
By keeping an eye on the metrics related to query performance, database
performance, throughput, resource utilization, resource saturation, and
other critical "assertion" errors it's possible to find problems that
may be lurking in your cluster. Early detection allows you to stay ahead
of the game, resolving issues before they affect performance.
These fundamental questions apply no matter how MongoDB is used, whether
through MongoDB Atlas, the
managed service available on all major cloud providers, or through
MongoDB Community or Enterprise editions, which are run in a
self-managed manner on-premise or in the cloud.
Each type of MongoDB deployment can be used to support databases at
scale with immense transaction volumes and that means performance tuning
should be a constant activity.
But the good news is that the same metrics are used in the tuning
process no matter how MongoDB is used.
However, as we'll see, the tuning process is much easier in the cloud
using MongoDB Atlas where
everything is more automatic and prefabricated.
Here are the key questions you should always be asking about MongoDB
performance tuning and the metrics that can answer them.
## Are all queries running at top speed?
Query problems are perhaps the lowest hanging fruit when it comes to
debugging MongoDB performance issues. Finding problems and fixing them
is generally straightforward. This section covers the metrics that can
reveal query performance problems and what to do if you find slow
queries.
**Slow Query Log.** The time elapsed and the method used to execute each
query is captured in MongoDB log files, which can be searched for slow
queries. In addition, queries over a certain threshold can be logged
explicitly by the MongoDB Database
Profiler.
- When a query is slow, first look to see if it was a collection scan
rather than an index
scan.
- Collection scans means all documents in a collection must be
read.
- Index scans limit the number of documents that must be
inspected.
- Consider adding an index when you see a lot of collection
scans.
- But remember: indexes have a cost when it comes to writes and
updates. Too many indexes that are underutilized can slow down the
modification or insertion of new documents. Depending on the nature
of your workloads, this may or may not be a problem.
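For example, on a deployment you can query directly, the `mongosh` shell
lets you capture slow operations with the profiler and then inspect them,
including whether they performed collection scans. The 100 ms threshold
below is just an illustrative choice:
``` javascript
// Record operations slower than 100 ms in the system.profile collection
db.setProfilingLevel(1, { slowms: 100 })

// Later, list the slowest captured operations; planSummary shows COLLSCAN vs IXSCAN
db.system.profile.find().sort({ millis: -1 }).limit(5)
```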
**Scanned vs Returned** is a metric that can be found in Cloud
Manager
and in MongoDB Atlas that
indicates how many documents had to be scanned in order to return the
documents meeting the query.
- In the absence of indexes, a rarely met ideal for this ratio is 1/1,
meaning all documents scanned were returned — no wasted scans. Most
of the time however, when scanning is done, documents are scanned
that are not returned meaning the ratio is greater than 1.
- When indexes are used, this ratio can be less than 1 or even 0,
meaning you have a covered
query.
When no documents needed to be scanned, producing a ratio of 0, that
means all the data needed was in the index.
- Scanning huge amounts of documents is inefficient and could indicate
problems regarding missing indexes or indicate a need for query
optimization.
**Scan and Order** is an index related metric that can be found in Cloud
Manager and MongoDB Atlas.
- A high Scan and Order number, say 20 or more, indicates that the
server is having to sort query results to get them in the right
order. This takes time and increases the memory load on the server.
- Fix this by making sure indexes are sorted in the order in which the
queries need the documents, or by adding missing indexes.
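As a concrete, purely illustrative example using a hypothetical `movies`
collection, `explain()` reveals an in-memory SORT stage, and a compound
index whose order matches the query's filter and sort removes it:
``` javascript
// A SORT stage in the output indicates an in-memory sort (hypothetical collection and fields)
db.movies.find({ year: 2001 }).sort({ "awards.wins": -1 }).explain("executionStats")

// An index matching the filter and the sort order lets MongoDB return documents pre-sorted
db.movies.createIndex({ year: 1, "awards.wins": -1 })
```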
**WiredTiger Ticket Number** is a key indicator of the performance of
the WiredTiger
storage engine, which, since release 3.2, has been the storage engine
for MongoDB.
- WiredTiger has a concept of read or write tickets that are created
when the database is accessed. The WiredTiger ticket number should
always be at 128.
- If the value goes below 128 and stays below that number, that means
the server is waiting on something and it's an indication of a
problem.
- The remedy is then to find the operations that are going too slowly
and start a debugging process.
- Deployments of MongoDB using releases older than 3.2 will certainly
get a performance boost from migrating to a later version that uses
WiredTiger.
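On a self-managed deployment, one place to spot-check the ticket counts
from the shell is the server status output. The exact location and
behavior of these counters has changed across MongoDB versions, so treat
this as indicative rather than definitive:
``` javascript
// Shows read/write tickets available and in use on many MongoDB versions
db.serverStatus().wiredTiger.concurrentTransactions
```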
**Document Structure Antipatterns** aren't revealed by a metric but can
be something to look for when debugging slow queries. Here are two of
the most notorious bad practices that hurt performance.
**Unbounded arrays:** In a MongoDB document, if an array can grow
without a size limit, it could cause a performance problem because every
time you update the array, MongoDB has to rewrite the array into the
document. If the array is huge, this can cause a performance problem.
Learn more at Avoid Unbounded
Arrays
and Performance Best Practices: Query Patterns and
Profiling.
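One common way to keep an array bounded is to cap it at write time with
`$push`, `$each`, and `$slice`. The collection, field, and size below are
hypothetical:
``` javascript
// deviceId and newEvent are placeholders; keep only the 100 most recent entries
db.devices.updateOne(
  { _id: deviceId },
  { $push: { events: { $each: [newEvent], $slice: -100 } } }
)
```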
**Subdocuments without bounds:** The same thing can happen with respect
to subdocuments. MongoDB supports inserting documents within documents,
with up to 128 levels of nesting. Each MongoDB document, including
subdocuments, also has a size limit of 16MB. If the number of
subdocuments becomes excessive, performance problems may result.
One common fix to this problem is to move some or all of the
subdocuments to a separate collection and then refer to them from the
original document. You can learn more about this topic in
this blog post.
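A minimal sketch of that fix, with hypothetical collections and fields:
instead of embedding an ever-growing list of subdocuments, store them in
their own collection and reference the parent document.
``` javascript
// Before (hypothetical): comments embedded in the post document can grow without bound
// { _id: 1, title: "...", comments: [ { user: "...", text: "..." }, ... ] }

// After: comments live in their own collection and reference the parent post
db.comments.insertOne({ postId: 1, user: "alice", text: "Nice post!" })
db.comments.find({ postId: 1 })
```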
## Is the database performing at top speed?
MongoDB, like most advanced database systems, has thousands of metrics
that track all aspects of database performance which includes reading,
writing, and querying the database, as well as making sure background
maintenance tasks like backups don't gum up the works.
The metrics described in this section all indicate larger problems that
can have a variety of causes. Like a warning light on a dashboard, these
metrics are invaluable high-level indicators that help you start looking
for the causes before the database has a catastrophic failure.
>
>
>Note: Various ways to get access to all of these metrics are covered below in the Getting Access to Metrics and Setting Up Monitoring section.
>
>
**Replication lag** occurs when a secondary member of a replica set
falls behind the primary. A detailed examination of the OpLog related
metrics can help get to the bottom of the problems but the causes are
often:
- A networking issue between the primary and secondary, making nodes
unreachable
- A secondary node applying data slower than the primary node
- Insufficient write capacity in which case you should add more shards
- Slow operations on the primary node, blocking replication
**Locking performance** problems are indicated when the number of
available read or write tickets remaining reaches zero, which means new
read or write requests will be queued until a new read or write ticket
is available.
- MongoDB's internal locking system is used to support simultaneous
queries while avoiding write conflicts and inconsistent reads.
- Locking performance problems can indicate a variety of problems
including suboptimal indexes and poor schema design patterns, both
of which can lead to locks being held longer than necessary.
**Number of open cursors rising** without a corresponding growth of
traffic is often symptomatic of poorly indexed queries or the result of
long running queries due to large result sets.
- This metric can be another indicator that the kind of query
optimization techniques mentioned in the first section are in order.
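If you want to spot-check this number from the shell on a deployment you
manage, the server status output includes cursor counters:
``` javascript
// Open cursor counts, including pinned cursors and how many have timed out
db.serverStatus().metrics.cursor
```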
## Is the cluster overloaded?
A large part of performance tuning is recognizing when your total
traffic, the throughput of transactions through the system, is rising
beyond the planned capacity of your cluster. By keeping track of growth
in throughput, it's possible to expand the capacity in an orderly
manner. Here are the metrics to keep track of.
**Read and Write Operations** is the fundamental metric that indicates
how much work is done by the cluster. The ratio of reads to writes is
highly dependent on the nature of the workloads running on the cluster.
- Monitoring read and write operations over time allows normal ranges
and thresholds to be established.
- As trends in read and write operations show growth in throughput,
capacity should be gradually increased.
**Document Metrics** and **Query Executor** are good indications of
whether the cluster is actually too busy. These metrics can be found in
Cloud Manager and in MongoDB
Atlas. As with read and write
operations, there is no right or wrong number for these metrics, but
having a good idea of what's normal helps you discern whether poor
performance is coming from large workload size or attributable to other
reasons.
- Document metrics are updated anytime you return a document or insert
a document. The more documents being returned, inserted, updated or
deleted, the busier your cluster is.
- Poor performance in a cluster that has plenty of capacity
usually points to query problems.
- The query executor tells how many queries are being processed
through two data points:
- Scanned - The average rate per second over the selected sample
period of index items scanned during queries and query-plan
evaluation.
- Scanned objects - The average rate per second over the selected
sample period of documents scanned during queries and query-plan
evaluation.
**Hardware and Network metrics** can be important indications that
throughput is rising and will exceed the capacity of computing
infrastructure. These metrics are gathered from the operating system and
networking infrastructure. To make these metrics useful for diagnostic
purposes, you must have a sense of what is normal.
- In MongoDB Atlas, or when
using Cloud Manager, these metrics are easily displayed. If you are
running on-premise, it depends on your operating system.
- There's a lot to track but at a minimum have a baseline range for
metrics like:
- Disk latency
- Disk IOPS
- Number of Connections
## Is the cluster running out of key resources?
A MongoDB cluster makes use of a variety of resources that are provided
by the underlying computing and networking infrastructure. These can be
monitored from within MongoDB as well as from outside of MongoDB at the
level of computing infrastructure as described in the previous section.
Here are the crucial resources that can be easily tracked from within
Mongo, especially through Cloud Manager and MongoDB
Atlas.
**Current number of client connections** is usually an effective metric
to indicate total load on a system. Keeping track of normal ranges at
various times of the day or week can help quickly identify spikes in
traffic.
- A related metric, percentage of connections used, can indicate when
MongoDB is getting close to running out of available connections.
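Both figures are also available directly from the shell if you want to
sample them yourself:
``` javascript
// current = connections in use, available = connections remaining before the limit
db.serverStatus().connections
```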
**Storage metrics** track how MongoDB is using persistent storage. In
the WiredTiger storage engine, each collection is a file and so is each
index. When a document in a collection is updated, the entire document
is re-written.
- If memory space metrics (dataSize, indexSize, or storageSize) or the
number of objects show a significant unexpected change while the
database traffic stays within ordinary ranges, it can indicate a
problem.
- A sudden drop in dataSize may indicate a large amount of data
deletion, which should be quickly investigated if it was not
expected.
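The dataSize, indexSize, storageSize, and object counts mentioned above
can be read per database with `db.stats()` (collection-level equivalents
come from the collection's `stats()` helper):
``` javascript
// Returns objects, dataSize, storageSize, and indexSize for the current database
db.stats()
```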
**Memory metrics** show how MongoDB is using the virtual memory of the
computing infrastructure that is hosting the cluster.
- An increasing number of page faults or a growing amount of dirty
data — data changed but not yet written to disk — can indicate
problems related to the amount of memory available to the cluster.
- Cache metrics can help determine if the working set is outgrowing
the available cache.
## Are critical errors on the rise?
MongoDB
asserts
are documents created, almost always because of an error, that are
captured as part of the MongoDB logging process.
- Monitoring the number of asserts created at various levels of
severity can provide a first level indication of unexpected
problems. Asserts can be message asserts, the most serious kind, or
warning asserts, regular asserts, and user asserts.
- Examining the asserts can provide clues that may lead to the
discovery of problems.
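The running totals for each assert type are exposed in the server status
output, which makes them easy to sample periodically:
``` javascript
// Counts of regular, warning, msg (message), and user asserts since the last restart
db.serverStatus().asserts
```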
## Getting Access to Metrics and Setting Up Monitoring
Making use of metrics is far easier if you know the data well: where it
comes from, how to get at it, and what it means.
As the MongoDB platform has evolved, it has become far easier to monitor
clusters and resolve common problems. In addition, the performance
tuning monitoring and analysis has become increasingly automated. For
example, MongoDB Atlas through
Performance Advisor will now suggest adding indexes if it detects a
query performance problem.
But it's best to know the whole story of the data, not just the pretty
graphs produced at the end.
## Data Sources for MongoDB Metrics
The sources for metrics used to monitor MongoDB are the logs created
when MongoDB is running and the commands that can be run inside of the
MongoDB system. These commands produce the detailed statistics that
describe the state of the system.
Monitoring MongoDB performance metrics
(WiredTiger)
contains an excellent categorization of the metrics available for
different purposes and the commands that can be used to get them. These
commands provide a huge amount of detailed information in raw form that
looks something like the following screenshot:
This information is of high quality but difficult to use.
## Monitoring Environments for MongoDB Metrics
As MongoDB has matured as a platform, specialized interfaces have been
created to bring together the most useful metrics.
- Ops Manager is a
management platform for on-premise and private cloud deployments of
MongoDB that includes extensive monitoring and alerting
capabilities.
- Cloud Manager is a
management platform for self-managed cloud deployments of MongoDB
that also includes extensive monitoring and alerting capabilities.
(Remember this screenshot reflects the user interface at the time of
writing.)
- Real Time Performance
Panel,
part of MongoDB Atlas or
MongoDB Ops Manager (requires MongoDB Enterprise Advanced
subscription), provides graph or table views of dozens of metrics
and is a great way to keep track of many aspects of performance,
including most of the metrics discussed earlier.
- Commercial products like New Relic, Sumo
Logic, and
DataDog all provide interfaces
designed for monitoring and alerting on MongoDB clusters. A variety
of open source platforms such as
mtools can be used as well.
## Performance Management Tools for MongoDB Atlas
MongoDB Atlas has taken advantage
of the standardized APIs and massive amounts of data available on cloud
platforms to break new ground in automating performance tuning. Also, in
addition to the Real Time Performance
Panel
mentioned above, the Performance
Advisor for
MongoDB Atlas analyzes queries
that you are actually making on your data, determines what's slow and
what's not, and makes recommendations for when to add indexes that take
into account the indexes already in use.
## The Professional Services Option
In a sense, the questions covered in this article represent a playbook
for running a performance tuning process. If you're already running such
a process, perhaps some new ideas have occurred to you based on the
analysis.
Resources like this article can help you achieve or refine your goals if
you know the questions to ask and some methods to get there. But if you
don't know the questions to ask or the best steps to take, it's wise to
avoid trial and error and ask someone with experience. With broad
expertise in tuning large MongoDB deployments, professional
services can help identify
the most effective steps to take to improve performance right away.
Once any immediate issues are resolved, professional services can guide
you in creating an ongoing streamlined performance tuning process to
keep an eye on and action the metrics important to your deployment.
## Wrap Up
We hope this article has made it clear that with a modest amount of
effort, it's possible to keep your MongoDB cluster in top shape. No
matter what types of workloads are running or where the deployment is
located, use the ideas and tools mentioned above to know what's
happening in your cluster and address performance problems before they
become noticeable or cause major outages.
>
>
>See the difference with MongoDB
>Atlas.
>
>Ready for Professional
>Services?
>
>
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Early detection of problems allows you to stay ahead of the game, resolving issues before they affect performance.",
"contentType": "Article"
} | MongoDB Performance Tuning Questions | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/getting-started-with-mongodb-and-mongoose | created | # Getting Started with MongoDB & Mongoose
In this article, we’ll learn how Mongoose, a third-party library for MongoDB, can help you to structure and access your data with ease.
## What is Mongoose?
Many who learn MongoDB get introduced to it through the very popular library, Mongoose. Mongoose is described as “elegant MongoDB object modeling for Node.js.”
Mongoose is an ODM (Object Data Modeling) library for MongoDB. While you don’t need to use an Object Data Modeling (ODM) or Object Relational Mapping (ORM) tool to have a great experience with MongoDB, some developers prefer them. Many Node.js developers choose to work with Mongoose to help with data modeling, schema enforcement, model validation, and general data manipulation. And Mongoose makes these tasks effortless.
> If you want to hear from the maintainer of Mongoose, Val Karpov, give this episode of the MongoDB Podcast a listen!
## Why Mongoose?
By default, MongoDB has a flexible data model. This makes MongoDB databases very easy to alter and update in the future. But a lot of developers are accustomed to having rigid schemas.
Mongoose forces a semi-rigid schema from the beginning. With Mongoose, developers must define a Schema and Model.
## What is a schema?
A schema defines the structure of your collection documents. A Mongoose schema maps directly to a MongoDB collection.
``` js
const blog = new Schema({
title: String,
slug: String,
published: Boolean,
author: String,
content: String,
tags: [String],
createdAt: Date,
updatedAt: Date,
comments: [{
user: String,
content: String,
votes: Number
}]
});
```
With schemas, we define each field and its data type. Permitted types are:
* String
* Number
* Date
* Buffer
* Boolean
* Mixed
* ObjectId
* Array
* Decimal128
* Map
## What is a model?
Models take your schema and apply it to each document in its collection.
Models are responsible for all document interactions like creating, reading, updating, and deleting (CRUD).
> An important note: the first argument passed to the model should be the singular form of your collection name. Mongoose automatically changes this to the plural form, transforms it to lowercase, and uses that for the database collection name.
``` js
const Blog = mongoose.model('Blog', blog);
```
In this example, `Blog` translates to the `blogs` collection.
## Environment setup
Let’s set up our environment. I’m going to assume you have [Node.js installed already.
We’ll run the following commands from the terminal to get going:
```
mkdir mongodb-mongoose
cd mongodb-mongoose
npm init -y
npm i mongoose
npm i -D nodemon
code .
```
This will create the project directory, initialize, install the packages we need, and open the project in VS Code.
Let’s add a script to our `package.json` file to run our project. We will also use ES Modules instead of Common JS, so we’ll add the module `type` as well. This will also allow us to use top-level `await`.
``` js
...
"scripts": {
"dev": "nodemon index.js"
},
"type": "module",
...
```
## Connecting to MongoDB
Now we’ll create the `index.js` file and use Mongoose to connect to MongoDB.
``` js
import mongoose from 'mongoose'
mongoose.connect("mongodb+srv://:@cluster0.eyhty.mongodb.net/myFirstDatabase?retryWrites=true&w=majority")
```
You could connect to a local MongoDB instance, but for this article we are going to use a free MongoDB Atlas cluster. If you don’t already have an account, it's easy to sign up for a free MongoDB Atlas cluster here.
And if you don’t already have a cluster set up, follow our guide to get your cluster created.
After creating your cluster, you should replace the connection string above with your connection string including your username and password.
> The connection string that you copy from the MongoDB Atlas dashboard will reference the `myFirstDatabase` database. Change that to whatever you would like to call your database.
## Creating a schema and model
Before we do anything with our connection, we’ll need to create a schema and model.
Ideally, you would create a schema/model file for each schema that is needed. So we’ll create a new folder/file structure: `model/Blog.js`.
``` js
import mongoose from 'mongoose';
const { Schema, model } = mongoose;
const blogSchema = new Schema({
title: String,
slug: String,
published: Boolean,
author: String,
content: String,
tags: [String],
createdAt: Date,
updatedAt: Date,
comments: [{
user: String,
content: String,
votes: Number
}]
});
const Blog = model('Blog', blogSchema);
export default Blog;
```
## Inserting data // method 1
Now that we have our first model and schema set up, we can start inserting data into our database.
Back in the `index.js` file, let’s insert a new blog article.
``` js
import mongoose from 'mongoose';
import Blog from './model/Blog';
mongoose.connect("mongodb+srv://mongo:mongo@cluster0.eyhty.mongodb.net/myFirstDatabase?retryWrites=true&w=majority")
// Create a new blog post object
const article = new Blog({
title: 'Awesome Post!',
slug: 'awesome-post',
published: true,
content: 'This is the best post ever',
tags: ['featured', 'announcement'],
});
// Insert the article in our MongoDB database
await article.save();
```
We first need to import the `Blog` model that we created. Next, we create a new blog object and then use the `save()` method to insert it into our MongoDB database.
Let’s add a bit more after that to log what is currently in the database. We’ll use the `findOne()` method for this.
``` js
// Find a single blog post
const firstArticle = await Blog.findOne({});
console.log(firstArticle);
```
Let’s run the code!
```
npm run dev
```
You should see the document inserted logged in your terminal.
> Because we are using `nodemon` in this project, every time you save a file, the code will run again. If you want to insert a bunch of articles, just keep saving. 😄
## Inserting data // method 2
In the previous example, we used the `save()` Mongoose method to insert the document into our database. This requires two actions: instantiating the object, and then saving it.
Alternatively, we can do this in one action using the Mongoose `create()` method.
``` js
// Create a new blog post and insert into database
const article = await Blog.create({
title: 'Awesome Post!',
slug: 'awesome-post',
published: true,
content: 'This is the best post ever',
tags: ['featured', 'announcement'],
});
console.log(article);
```
This method is much better! Not only can we insert our document, but we also get the document returned, along with its `_id`, when we console log it.
## Update data
Mongoose makes updating data very convenient too. Expanding on the previous example, let’s change the `title` of our article.
``` js
article.title = "The Most Awesomest Post!!";
await article.save();
console.log(article);
```
We can directly edit the local object, and then use the `save()` method to write the update back to the database. I don’t think it can get much easier than that!
## Finding data
Let’s make sure we are updating the correct document. We’ll use a special Mongoose method, `findById()`, to get our document by its ObjectId.
``` js
const article = await Blog.findById("62472b6ce09e8b77266d6b1b").exec();
console.log(article);
```
> Notice that we use the `exec()` Mongoose function. This is technically optional and returns a promise. In my experience, it’s better to use this function since it will prevent some head-scratching issues. If you want to read more about it, check out this note in the Mongoose docs about promises.
There are many query options in Mongoose. View the full list of queries.
## Projecting document fields
Just like with the standard MongoDB Node.js driver, we can project only the fields that we need. Let’s only get the `title`, `slug`, and `content` fields.
``` js
const article = await Blog.findById("62472b6ce09e8b77266d6b1b", "title slug content").exec();
console.log(article);
```
The second parameter can be of type `Object|String|Array` to specify which fields we would like to project. In this case, we used a `String`.
## Deleting data
Just like in the standard MongoDB Node.js driver, we have the `deleteOne()` and `deleteMany()` methods.
``` js
const blog = await Blog.deleteOne({ author: "Jesse Hall" })
console.log(blog)
const blog = await Blog.deleteMany({ author: "Jesse Hall" })
console.log(blog)
```
## Validation
Notice that the documents we have inserted so far have not contained an `author`, dates, or `comments`. So far, we have defined what the structure of our document should look like, but we have not defined which fields are actually required. At this point any field can be omitted.
Let’s set some required fields in our `Blog.js` schema.
``` js
const blogSchema = new Schema({
title: {
type: String,
required: true,
},
slug: {
type: String,
required: true,
lowercase: true,
},
published: {
type: Boolean,
default: false,
},
author: {
type: String,
required: true,
},
content: String,
tags: [String],
createdAt: {
type: Date,
default: () => Date.now(),
immutable: true,
},
updatedAt: Date,
comments: [{
user: String,
content: String,
votes: Number
}]
});
```
When including validation on a field, we pass an object as its value.
> `value: String` is the same as `value: {type: String}`.
There are several validation methods that can be used.
We can set `required` to true on any fields we would like to be required.
For the `slug`, we want the string to always be in lowercase. For this, we can set `lowercase` to true. This will take the slug input and convert it to lowercase before saving the document to the database.
For our `createdAt` date, we can set the default by using an arrow function. We also want this date to be impossible to change later. We can do that by setting `immutable` to true.
> Validators only run on the create or save methods.
## Other useful methods
Mongoose uses many standard MongoDB methods plus introduces many extra helper methods that are abstracted from regular MongoDB methods. Next, we’ll go over just a few of them.
### `exists()`
The `exists()` method returns either `null` or the ObjectId of a document that matches the provided query.
``` js
const blog = await Blog.exists({ author: "Jesse Hall" })
console.log(blog)
```
### `where()`
Mongoose also has its own style of querying data. The `where()` method allows us to chain and build queries.
``` js
// Instead of using a standard find method
const blogFind = await Blog.findOne({ author: "Jesse Hall" });
// Use the equivalent where() method
const blogWhere = await Blog.where("author").equals("Jesse Hall");
console.log(blogWhere)
```
Either of these methods work. Use whichever seems more natural to you.
You can also chain multiple `where()` methods to include even the most complicated query.
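For example, building on the blog model above, a chained query might look like this:
``` js
// Find published posts by this author, newest first, limited to 5
const posts = await Blog.where("author").equals("Jesse Hall")
  .where("published").equals(true)
  .sort("-createdAt")
  .limit(5);
console.log(posts);
```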
### `select()`
To include projection when using the `where()` method, chain the `select()` method after your query.
``` js
const blog = await Blog.where("author").equals("Jesse Hall").select("title author")
console.log(blog)
```
## Multiple schemas
It's important to understand your options when modeling data.
If you’re coming from a relational database background, you’ll be used to having separate tables for all of your related data.
Generally, in MongoDB, data that is accessed together should be stored together.
You should plan this out ahead of time if possible. Nest data within the same schema when it makes sense.
If you have the need for separate schemas, Mongoose makes it a breeze.
Let’s create another schema so that we can see how multiple schemas can be used together.
We’ll create a new file, `User.js`, in the model folder.
``` js
import mongoose from 'mongoose';
const {Schema, model} = mongoose;
const userSchema = new Schema({
name: {
type: String,
required: true,
},
email: {
type: String,
minLength: 10,
required: true,
lowercase: true
},
});
const User = model('User', userSchema);
export default User;
```
For the `email`, we are using a new property, `minLength`, to require a minimum character length for this string.
Now we’ll reference this new user model in our blog schema for the `author` and `comments.user`.
``` js
import mongoose from 'mongoose';
const { Schema, SchemaTypes, model } = mongoose;
const blogSchema = new Schema({
...,
author: {
type: SchemaTypes.ObjectId,
ref: 'User',
required: true,
},
...,
comments: [{
user: {
type: SchemaTypes.ObjectId,
ref: 'User',
required: true,
},
content: String,
votes: Number
}]
});
...
```
Here, we set the `author` and `comments.user` to `SchemaTypes.ObjectId` and added a `ref`, or reference, to the user model.
This will allow us to “join” our data a bit later.
And don’t forget to destructure `SchemaTypes` from `mongoose` at the top of the file.
Lastly, let’s update the `index.js` file. We’ll need to import our new user model, create a new user, and create a new article with the new user’s `_id`.
``` js
...
import User from './model/User.js';
...
const user = await User.create({
name: 'Jesse Hall',
email: 'jesse@email.com',
});
const article = await Blog.create({
title: 'Awesome Post!',
slug: 'Awesome-Post',
author: user._id,
content: 'This is the best post ever',
tags: ['featured', 'announcement'],
});
console.log(article);
```
Notice now that there is a `users` collection along with the `blogs` collection in the MongoDB database.
You’ll now see only the user `_id` in the author field. So, how do we get all of the info for the author along with the article?
We can use the `populate()` Mongoose method.
``` js
const article = await Blog.findOne({ title: "Awesome Post!" }).populate("author");
console.log(article);
```
Now the data for the `author` is populated, or “joined,” into the `article` data. Behind the scenes, Mongoose runs additional queries to fetch the referenced documents, with a result similar to what MongoDB’s `$lookup` aggregation stage produces.
## Middleware
In Mongoose, middleware are functions that run before and/or during the execution of asynchronous functions at the schema level.
Here’s an example. Let’s update the `updatedAt` date every time an article is saved or updated. We’ll add this to our `Blog.js` model.
``` js
blogSchema.pre('save', function(next) {
this.updatedAt = Date.now(); // update the date every time a blog post is saved
next();
});
```
Then in the `index.js` file, we’ll find an article, update the title, and then save it.
``` js
const article = await Blog.findById("6247589060c9b6abfa1ef530").exec();
article.title = "Updated Title";
await article.save();
console.log(article);
```
Notice that we now have an `updatedAt` date!
Besides `pre()`, there is also a `post()` mongoose middleware function.
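A `post()` hook runs after the operation completes. As a small illustrative example, you could log every time an article is saved:
``` js
blogSchema.post('save', function(doc, next) {
  console.log(`Blog post "${doc.title}" was saved`); // runs after the document is saved
  next();
});
```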
## Next steps
I think our example here could use another schema for the `comments`. Try creating that schema and testing it by adding a few users and comments.
There are many other great Mongoose helper methods that are not covered here. Be sure to check out the [official documentation for references and more examples.
## Conclusion
I think it’s great that developers have many options for connecting and manipulating data in MongoDB. Whether you prefer Mongoose or the standard MongoDB drivers, in the end, it’s all about the data and what’s best for your application and use case.
I can see why Mongoose appeals to many developers and I think I’ll use it more in the future. | md | {
"tags": [
"JavaScript",
"MongoDB"
],
"pageDescription": "In this article, we’ll learn how Mongoose, a library for MongoDB, can help you to structure and access your data with ease. Many who learn MongoDB get introduced to it through the very popular library, Mongoose. Mongoose is described as “elegant MongoDB object modeling for Node.js.\"",
"contentType": "Quickstart"
} | Getting Started with MongoDB & Mongoose | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/php/php-error-handling | created | # Handling MongoDB PHP Errors
Welcome to this article about MongoDB error handling in PHP. Code samples and tutorials abound on the web, but for clarity's sake, they often don't show what to do with potential errors. Our goal here is to show you common mechanisms to deal with potential issues like connection loss, temporary inability to read/write, initialization failures, and more.
This article was written using PHP 8.1 and MongoDB 6.1.1 (serverless) with the PHP Extension and Library 1.15. As things may change in the future, you can refer to our official MongoDB PHP documentation.
## Prerequisites
To execute the code sample created for this article, you will need:
* A MongoDB Atlas cluster with sample data loaded. We have MongoDB Atlas free tier clusters available to all.
* A web server with PHP and the MongoDB PHP driver installed. Ideally, follow our "Getting Set Up to Run PHP with MongoDB" guide.
* Alternatively, you can consider using PHP's built-in webserver, which can be simpler to set up and might avoid other web server environment variances.
* A functioning Composer to set up the MongoDB PHP Library.
* A code editor, like Visual Studio Code.
We will refer to the MongoDB PHP Driver, which has two distinct components. First, there's the MongoDB PHP Extension, which is the system-level interface to MongoDB.
Secondly, there's the MongoDB PHP Library, a PHP library that is the application's interface to MongoDB. You can learn about the people behind our PHP driver in this excellent podcast episode.
:youtube[]{vid=qOuGM6dNDm8}
## Initializing our code sample
Clone it from the Github repository to a local folder in the public section of your web server and website. You can use the command
```
git clone https://github.com/mongodb-developer/php-error-handling-sample
```
Go to the project's directory with the command
```
cd php-error-handling-sample
```
and run the command
```
composer install
```
Composer will download external libraries to the "vendor" directory (see the screenshot below). Note that Composer will check if the MongoDB PHP extension is installed, and will report an error if it is not.
Create an .env file containing your database user credentials in the same folder as index.php. Our previous tutorial describes how to do this in the "Securing Usernames and Passwords" section. The .env file is a simple text file formatted as follows:
```
MDB_USER=[user name]
MDB_PASS=[password]
```
In your web browser, navigate to `website-url/php-error-handling-sample/`, and `index.php` will be executed.
Upon execution, our code sample outputs a page like this, and there are various ways to induce errors to see how the various checks work by commenting/uncommenting lines in the source code.
## System-level error handling
Initially, developers run into system-level issues related to the PHP configuration and whether or not the MongoDB PHP driver is properly installed. That's especially true when your code is deployed on servers you don't control. Here are two common system-level runtime errors and how to check for them:
1. Is the MongoDB extension installed and loaded?
2. Is the MongoDB PHP Library available to your code?
There are many ways to check if the MongoDB PHP extension is installed and loaded. Here are two of them in this article, while the others are in the code file.
1. You can call PHP's `extension_loaded()` function with `mongodb` as the argument. It will return true or false.
2. You can call `class_exists()` to check for the existence of the `MongoDB\Driver\Manager` class defined in the MongoDB PHP extension.
3. Call `phpversion('mongodb')`, which should return the MongoDB PHP extension version number on success and false on failure.
4. The MongoDB PHP Library also contains a detect-extension.php file which shows another way of detecting if the extension was loaded. This file is not part of the distribution but it is documented.
```
// MongoDB Extension check, Method #1
if ( extension_loaded('mongodb') ) {
echo(MSG_EXTENSION_LOADED_SUCCESS);
} else {
echo(MSG_EXTENSION_LOADED_FAIL);
}
// MongoDB Extension check, Method #2
if ( !class_exists('MongoDB\Driver\Manager') ) {
echo(MSG_EXTENSION_LOADED2_FAIL);
exit();
}
else {
echo(MSG_EXTENSION_LOADED2_SUCCESS);
}
```
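For completeness, here is a sketch of method #3 from the list above. `phpversion('mongodb')` returns the extension's version string when it is loaded and `false` otherwise; the messages below are illustrative rather than the constants used in the sample project:
```
// MongoDB Extension check, Method #3
$mdb_ext_version = phpversion('mongodb');
if ( $mdb_ext_version !== false ) {
    echo( 'MongoDB extension version: '.$mdb_ext_version );
} else {
    echo( 'MongoDB extension not loaded.' );
}
```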
If these checks fail, the MongoDB PHP extension has not been loaded properly and you should check your php.ini configuration and error logs, as this is a system configuration issue. Our Getting Set Up to Run PHP with MongoDB article provides debugging steps and tips which may help you.
Once the MongoDB PHP extension is up and running, the next thing to do is to check if the MongoDB PHP Library is available to your code. You are not obligated to use the library, but we highly recommend you do. It keeps things more abstract, so you can focus on your app instead of the inner workings of MongoDB.
Look for the `MongoDB\Client` class. If it's there, the library has been added to your project and is available at runtime.
```
// MongoDB PHP Library check
if ( !class_exists('MongoDB\Client') ) {
echo(MSG_LIBRARY_MISSING);
exit();
}
else {
echo(MSG_LIBRARY_PRESENT);
}
```
## Database instance initialization
You can now instantiate a client with your connection string. (Here's how to find the Atlas connection string.)
The instantiation will fail if something is wrong with the connection string parsing or the driver cannot resolve the connection's SRV (DNS) record. Possible causes for SRV resolution failures include the IP address being rejected by the MongoDB cluster or network connection issues while checking the SRV.
```
// Fail if the MongoDB Extension is not configured and loaded
// Fail if the connection URL is wrong
try {
// IMPORTANT: replace with YOUR server DNS name
$mdbserver = 'serverlessinstance0.owdak.mongodb.net';
$client = new MongoDB\Client('mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$mdbserver.'/?retryWrites=true&w=majority');
echo(MSG_CLIENT_SUCCESS);
// succeeds even if user/password is invalid
}
catch (\Exception $e) {
// Fails if the URL is malformed
// Fails if the SRV (DNS) record cannot be resolved
// Fails if the IP is blocked by an ACL or firewall
echo(MSG_CLIENT_FAIL);
exit();
}
```
Up to this point, the library has just constructed an internal driver manager, and no I/O to the cluster has been performed. This behavior is described in this PHP library documentation page — see the "Behavior" section.
It's important to know that even though the client was successfully instantiated, it does not mean your user/password pair is valid, and it doesn't automatically grant you access to anything. Your code has yet to try accessing any information, so your authentication has not been verified.
When you first create a MongoDB Atlas cluster, there's a "Connect" button in the GUI to retrieve the instance's URL. If no user database exists, you will be prompted to add one, and add an IP address to the access list.
In the MongoDB Atlas GUI sidebar, there's a "Security" section with links to the "Database Access" and "Network Access" configuration pages. "Database Access" is where you create database users and their privileges. "Network Access" lets you add IP addresses to the IP access list.
Next, you can do a first operation that requires an I/O connection and an authentication, such as listing the databases with `listDatabaseNames()`, as shown in the code block below. If it succeeds, your user/password pair is valid. If it fails, it could be that the pair is invalid or the user does not have the proper privileges.
```
try {
// if listDatabaseNames() works, your authorization is valid
$databases_list_iterator = $client->listDatabaseNames(); // asks for a list of database names on the cluster
$databases_list = iterator_to_array( $databases_list_iterator );
echo( MSG_CLIENT_AUTH_SUCCESS );
}
catch (\Exception $e) {
// Fail if incorrect user/password, or not authorized
// Could be another issue, check content of $e->getMessage()
echo( MSG_EXCEPTION. $e->getMessage() );
exit();
}
```
There are other reasons why any MongoDB command could fail (connectivity loss, etc.), and the exception message will reveal that. These first initialization steps are common points of friction as cluster URLs vary from project to project, IPs change, and passwords are reset.
## CRUD error handling
If you haven't performed CRUD operations with MongoDB before, we have a great tutorial entitled "Creating, Reading, Updating, and Deleting MongoDB Documents with PHP." Here, we'll look at the error handling mechanisms.
We will access one of the sample databases called "sample\_analytics," and read/write into the "customers" collection. If you're unfamiliar with MongoDB's terminology, here's a quick overview of the MongoDB database and collections.
Sometimes, ensuring the connected cluster contains the expected database(s) and collection(s) might be a good idea. In our code sample, we can check as follows:
```
// check if our desired database is present in the cluster by looking up its name
$workingdbname = 'sample_analytics';
if ( in_array( $workingdbname, $databases_list ) ) {
echo( MSG_DATABASE_FOUND." '$workingdbname'
" );
}
else {
echo( MSG_DATABASE_NOT_FOUND." '$workingdbname'
" );
exit();
}
// check if your desired collection is present in the database
$workingCollectionname = 'customers';
$collections_list_iterator = $client->$workingdbname->listCollections();
$foundCollection = false;
$collections_list_iterator->rewind();
while( $collections_list_iterator->valid() ) {
if ( $collections_list_iterator->current()->getName() == $workingCollectionname ) {
$foundCollection = true;
echo( MSG_COLLECTION_FOUND." '$workingCollectionname'
" );
break;
}
$collections_list_iterator->next();
}
if ( !$foundCollection ) {
echo( MSG_COLLECTION_NOT_FOUND." '$workingCollectionname'
" );
exit();
}
```
MongoDB CRUD operations can encounter exceptions for a multitude of legitimate reasons. The general way of handling these errors is to put your operation in a try/catch block to avoid a fatal error.
If no exception is encountered, most operations return a document containing information about how the operation went.
For example, write operations return a document that contains a write concern "isAcknowledged" boolean and a WriteResult object. It has important feedback data, such as the number of matched and modified documents (among other things). Your app can check to ensure the operation performed as expected.
If an exception does happen, you can add further checks to see exactly what type of exception. For reference, look at the MongoDB exception class tree and keep in mind that you can get more information from the exception than just the message. The driver's ServerException class can also provide the exception error code, the source code line and the trace, and more!
For example, a common exception occurs when the application tries to insert a new document with an existing unique ID. This could happen for many reasons, including in high concurrency situations where multiple threads or clients might attempt to create identical records.
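As a sketch of how that could be handled (the collection and document values here are illustrative, and the exact exception subclass you see can vary with driver version), a duplicate `_id` insert can be caught like this:
```
// Illustrative sketch: catching a duplicate key error on insert
try {
    $collection = $client->sample_analytics->customers;
    $result = $collection->insertOne( [ '_id' => 12345, 'username' => 'demo_user' ] );
    echo( 'Inserted '.$result->getInsertedCount().' document(s)' );
    // Inserting the same _id again triggers a duplicate key (E11000) error
    $collection->insertOne( [ '_id' => 12345, 'username' => 'demo_user' ] );
}
catch (\MongoDB\Driver\Exception\BulkWriteException $e) {
    // Typically a duplicate key; inspect the message and code for details
    echo( 'Write exception ('.$e->getCode().'): '.$e->getMessage() );
}
catch (\Exception $e) {
    // Any other failure: connectivity loss, authorization, etc.
    echo( 'Exception: '.$e->getMessage() );
}
```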
MongoDB maintains an array of tests for its PHP Library (see DocumentationExamplesTest.php on Github). It contains great code examples of various queries, with error handling. I highly recommend looking at it and using it as a reference since it will stay up to date with the latest driver and APIs.
## Conclusion
This article was intended to introduce MongoDB error handling in PHP by highlighting common pitfalls and frequently asked questions we answer. Understanding the various MongoDB error-handling mechanisms will make your application rock-solid, simplify your development workflow, and ultimately make you and your team more productive.
To learn more about using MongoDB in PHP, learn from our PHP Library tutorial, and I invite you to connect via the PHP section of our developer community forums.
## References
* MongoDB PHP Quickstart Source Code Repository
* MongoDB PHP Driver Documentation provides thorough documentation describing how to use PHP with your MongoDB cluster.
* MongoDB Query Document documentation details the full power available for querying MongoDB collections. | md | {
"tags": [
"PHP",
"MongoDB"
],
"pageDescription": "This article shows you common mechanisms to deal with potential PHP Errors and Exceptions triggered by connection loss, temporary inability to read/write, initialization failures, and more.\n",
"contentType": "Article"
} | Handling MongoDB PHP Errors | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/secure-data-access-views | created | # How to Secure MongoDB Data Access with Views
## Introduction
Sometimes, MongoDB collections contain sensitive information that requires access control. Using the Role-Based Access Control (RBAC) provided by MongoDB, it's easy to restrict access to this collection.
But what if you want to share your collection to a wider audience without exposing sensitive data?
For example, it could be interesting to share your collections with the marketing team for analytics purposes without sharing personal identifiable information (PII) or data you prefer to keep private, like employee salaries.
It's possible to achieve this result with MongoDB views combined with the MongoDB RBAC, and this is what we are going to explore in this blog post.
## Prerequisites
You'll need either:
- A MongoDB cluster with authentication activated (which is somewhat recommended in production!).
- A MongoDB Atlas cluster.
I'll assume you already have an admin user on your cluster with full authorizations or at least a user that can create views, custom roles, and users. If you are in Atlas, you can create this user in the `Database Access` tab or use the MongoDB Shell, like this:
```bash
mongosh "mongodb://localhost/admin" --quiet --eval "db.createUser({'user': 'root', 'pwd': 'root', 'roles': 'root']});"
```
Then you can connect with the command line provided in Atlas or like this, if you are not in Atlas:
```js
mongosh "mongodb://localhost" --quiet -u root -p root
```
## Creating a MongoDB collection with sensitive data
In this example, I'll pretend to have an `employees` collection with sensitive data:
```js
db.employees.insertMany([
{
_id: 1,
firstname: 'Scott',
lastname: 'Snyder',
age: 21,
ssn: '351-40-7153',
salary: 100000
},
{
_id: 2,
firstname: 'Patricia',
lastname: 'Hanna',
age: 57,
ssn: '426-57-8180',
salary: 95000
},
{
_id: 3,
firstname: 'Michelle',
lastname: 'Blair',
age: 61,
ssn: '399-04-0314',
salary: 71000
},
{
_id: 4,
firstname: 'Benjamin',
lastname: 'Roberts',
age: 46,
ssn: '712-13-9307',
salary: 60000
},
{
_id: 5,
firstname: 'Nicholas',
lastname: 'Parker',
age: 69,
ssn: '320-25-5610',
salary: 81000
}
]
)
```
## How to create a view in MongoDB to hide sensitive fields
Now I want to share this collection to a wider audience, but I don’t want to share the social security numbers and salaries.
To solve this issue, I can create a view with a `$project` stage that only allows a set of selected fields.
```js
db.createView('employees_view', 'employees', [{$project: {firstname: 1, lastname: 1, age: 1}}])
```
> Note that I'm not doing `{$project: {ssn: 0, salary: 0}}` because every field except these two would appear in the view.
It works today, but maybe tomorrow, I'll add a `credit_card` field in some documents. It would then appear instantly in the view.
Let's confirm that the view works:
```js
db.employees_view.find()
```
Results:
```js
[
{ _id: 1, firstname: 'Scott', lastname: 'Snyder', age: 21 },
{ _id: 2, firstname: 'Patricia', lastname: 'Hanna', age: 57 },
{ _id: 3, firstname: 'Michelle', lastname: 'Blair', age: 61 },
{ _id: 4, firstname: 'Benjamin', lastname: 'Roberts', age: 46 },
{ _id: 5, firstname: 'Nicholas', lastname: 'Parker', age: 69 }
]
```
Depending on your schema design and how you want to filter the fields, it could be easier to use `$unset` instead of `$project`. You can learn more in the Practical MongoDB Aggregations Book. But again, `$unset` will just remove the specified fields without filtering new fields that could be added in the future.
## Managing data access with MongoDB roles and users
Now that we have our view, we can share this with restricted access rights. In MongoDB, we need to create a custom role to achieve this.
Here are the command lines if you are not in Atlas.
```js
use admin
db.createRole(
{
role: "view_access",
privileges: [
{resource: {db: "test", collection: "employees_view"}, actions: ["find"]}
],
roles: []
}
)
```
Then we can create the user:
```js
use admin
db.createUser({user: 'view_user', pwd: '123', roles: ["view_access"]})
```
If you are in Atlas, database access is managed directly in the Atlas website in the `Database Access` tab. You can also use the Atlas CLI if you feel like it.
(Screenshot: the Database Access tab in Atlas.)
Then you need to create a custom role.
> Note: In Step 2, I only selected the _Collection Actions > Query and Write Actions > find_ option.
Now that your role is created, head back to the `Database Users` tab and create a user with this custom role.
## Testing data access control with restricted user account
Now that our user is created, we can confirm that this new restricted user doesn't have access to the underlying collection but has access to the view.
```js
$ mongosh "mongodb+srv://hidingfields.as3qc0s.mongodb.net/test" --apiVersion 1 --username view_user --quiet
Enter password: ***
Atlas atlas-odym8f-shard-0 [primary] test> db.employees.find()
MongoServerError: user is not allowed to do action [find] on [test.employees]
Atlas atlas-odym8f-shard-0 [primary] test> db.employees_view.find()
[
{ _id: 1, firstname: 'Scott', lastname: 'Snyder', age: 21 },
{ _id: 2, firstname: 'Patricia', lastname: 'Hanna', age: 57 },
{ _id: 3, firstname: 'Michelle', lastname: 'Blair', age: 61 },
{ _id: 4, firstname: 'Benjamin', lastname: 'Roberts', age: 46 },
{ _id: 5, firstname: 'Nicholas', lastname: 'Parker', age: 69 }
]
```
## Wrap-up
In this blog post, you learned how to share your MongoDB collections to a wider audience — even the most critical ones — without exposing sensitive data.
Note that views can use the indexes from the source collection so your restricted user can leverage those for more advanced queries.
You could also choose to add an extra `$match` stage before your `$project` stage to filter entire documents from ever appearing in the view. You can see an example in the Practical MongoDB Aggregations Book. And don't forget to support the `$match` with an index!
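For example, a hedged sketch of such a view, where the age threshold and view name are purely illustrative, could look like this:
```js
db.createView(
  'employees_view_over_30',
  'employees',
  [
    { $match: { age: { $gt: 30 } } },
    { $project: { firstname: 1, lastname: 1, age: 1 } }
  ]
)
```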
Questions? Comments? Let's continue the conversation over at the MongoDB Developer Community.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "In this blog post, you will learn how to share a MongoDB collection to a wider audience without exposing sensitive fields in your documents.",
"contentType": "Article"
} | How to Secure MongoDB Data Access with Views | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/build-totally-serverless-rest-api-mongodb-atlas | created | # Build a Totally Serverless REST API with MongoDB Atlas
So you want to build a REST API, but you don't want to worry about the management burden when it comes to scaling it to meet the demand of your users. Or maybe you know your API will experience more burst usage than constant demand and you'd like to reduce your infrastructure costs.
These are two great scenarios where a serverless architecture could benefit your API development. However, did you know that the serverless architecture doesn't stop at just the API level? You could make use of a serverless database in addition to the application layer and reap the benefits of going totally serverless.
In this tutorial, we'll see how to go totally serverless in our application and data development using a MongoDB Atlas serverless instance as well as Atlas HTTPS endpoints for our application.
## Prerequisites
You won't need much to be successful with this tutorial:
- A MongoDB Atlas account.
- A basic understanding of Node.js and JavaScript.
We'll see how to get started with MongoDB Atlas in this tutorial, but you'll need a basic understanding of JavaScript because we'll be using it to create our serverless API endpoints.
## Deploy and configure a MongoDB Atlas serverless instance
We're going to start this serverless journey with a serverless database deployment. Serverless instances provide an on-demand database endpoint for your application that will automatically scale up and down to zero with application demand and only charge you based on your usage. Due to the limited strain we put on our database in this tutorial, you'll have to use your imagination when it comes to scaling.
It’s worth noting that the serverless API that we create with the Atlas HTTPS endpoints can use a pre-provisioned database instance and is not limited to just serverless database instances. We’re using a serverless instance to maintain 100% serverless scalability from database to application.
From the MongoDB Atlas Dashboard, click the "Create" button.
You'll want to choose "Serverless" as the instance type followed by the cloud in which you'd like it to live. For this example, the cloud vendor isn't important, but if you have other applications that exist on one of the listed clouds, for latency reasons it would make sense to keep things consistent. You’ll notice that the configuration process is very minimal and you never need to think about provisioning any specified resources for your database.
When you click the "Create Instance" button, your instance is ready to go!
## Developing a REST API with MongoDB Atlas HTTPS endpoints
To create the endpoints for our API, we are going to leverage Atlas HTTPS endpoints. Think of these as a combination of Functions as a Service (FaaS) and an API gateway that routes URLs to a function. This service can be found in the "App Services" tab area within MongoDB Atlas.
Click on the "App Services" tab within MongoDB Atlas.
You'll need to create an application for this particular project. Choose the "Create a New App" button and select the serverless instance as the cluster that you wish to use.
There's a lot you can do with Atlas App Services beyond API creation in case you wanted to explore items out of the scope of this tutorial.
From the App Services dashboard, choose "HTTPS Endpoints."
We're going to create our first endpoint and it will be responsible for creating a new document.
When creating the new endpoint, use the following information:
- Route: /person
- Enabled: true
- HTTP Method: POST
- Respond with Result: true
- Return Type: JSON
- Function: New Function
The remaining fields can be left as their default values.
Give the new function a name. The name is not important, but it would make sense to call it something like "createPerson" for your own sanity.
The JavaScript for the function should look like the following:
```javascript
exports = function({ query, headers, body}, response) {
const result = context.services
.get("mongodb-atlas")
.db("examples")
.collection("people")
.insertOne(JSON.parse(body.text()));
return result;
};
```
Remember, our goal is to create a document.
In the above function, we are using the "examples" database and the "people" collection within our serverless instance. Neither need to exist prior to creating the function or executing our code. They will be created at runtime.
For this example, we are not doing any data validation. Whatever the client sends through a request body will be saved into MongoDB. Your actual function logic will likely vary to accommodate more of your business logic.
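If you did want some basic validation, a minimal sketch along these lines could be added to the same function. The required fields chosen here are just examples and not part of the original sample:
```javascript
exports = function({ query, headers, body}, response) {
    const person = JSON.parse(body.text());

    // Illustrative check: require a couple of fields before saving
    if (!person.firstname || !person.lastname) {
        response.setStatusCode(400);
        return { error: "firstname and lastname are required" };
    }

    return context.services
        .get("mongodb-atlas")
        .db("examples")
        .collection("people")
        .insertOne(person);
};
```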
We're not in the clear yet. We need to change our authentication rules for the function. Click on the "Functions" navigation item and then choose the "Settings" tab. More complex authentication mechanisms are out of the scope of this particular tutorial, so we're going to give the function "System" level authentication. Consult the documentation to see what authentication mechanisms make the most sense for you.
We're going to create one more endpoint for this tutorial. We want to be able to retrieve any document from within our collection.
Create a new HTTPS endpoint. Use the following information:
- Route: /people
- Enabled: true
- HTTP Method: GET
- Respond with Result: true
- Return Type: JSON
- Function: New Function
Once again, the other fields can be left as the default. Choose a name like "retrievePeople" for your function, or whatever makes the most sense to you.
The function itself can be as simple as the following:
```javascript
exports = function({ query, headers, body}, response) {
const docs = context.services
.get("mongodb-atlas")
.db("examples")
.collection("people")
.find({})
.toArray();
return docs;
};
```
In the above example, we're using an empty filter to find and return all documents in our collection.
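If you later wanted the endpoint to support simple filtering, one hedged approach is to build the filter from the query string; the `name` parameter below is just an example and not part of the original tutorial:
```javascript
exports = function({ query, headers, body}, response) {
    // Build a filter from an optional ?name=... query string parameter
    const filter = query.name ? { name: query.name } : {};

    return context.services
        .get("mongodb-atlas")
        .db("examples")
        .collection("people")
        .find(filter)
        .toArray();
};
```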
To make this work, don't forget to change the authentication on the "retrievePeople" function like you did the "createPerson" function. The "System" level works for this example, but once again, pick what makes the most sense for your production scenario.
## MongoDB Atlas App Services authentication, authorization, and general security
We brushed over it throughout the tutorial, but it’s worth further clarifying the levels of security available to you when developing a serverless REST API with MongoDB Atlas.
We can use all or some of the following to improve the security of our API:
- Authentication
- Authorization
- Network security with IP access lists
With a network rule, you can allow everyone on the internet to be able to reach your API or specific IP addresses. This can be useful if you are building a public API or something internal for your organization.
The network rules for your application should be your first line of defense.
Throughout this tutorial, we used “System” level authentication for our endpoints. This essentially allows anyone who can reach our API from a network level access to our API without question. If you want to improve the security beyond a network rule, you can change the authentication mechanism to something like “Application” or “User” instead.
MongoDB offers a variety of ways to authenticate users. For example, you could enable email and password authentication, OAuth, or something custom. This would require the user to authenticate and establish a token or session prior to interacting with your API.
Finally, you can take advantage of authorization rules within Atlas App Services. This can be valuable if you want to restrict users in what they can do with your API. These rules are created using special JSON expressions.
If you’re interested in learning the implementation specifics behind network level security, authentication, or authorization, take a look at the documentation.
## Conclusion
You just saw how to get started developing a truly serverless application with MongoDB Atlas. Not only was the API serverless through use of Atlas HTTPS endpoints, but it also made use of a serverless database instance.
When using this approach, your application will scale to meet demand without any developer intervention. You'll also be billed for usage rather than uptime, which could provide many advantages.
If you want to learn more, consider checking out the MongoDB Community Forums to see how other developers are integrating serverless.
| md | {
"tags": [
"Atlas",
"JavaScript",
"Serverless"
],
"pageDescription": "Learn how to go totally serverless in both the database and application by using MongoDB Atlas serverless instances and the MongoDB Atlas App Services.",
"contentType": "Tutorial"
} | Build a Totally Serverless REST API with MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-cluster-automation-using-scheduled-triggers | created | # Atlas Cluster Automation Using Scheduled Triggers
Every action you can take in the Atlas user interface is backed by a corresponding Administration API, which allows you to easily bring automation to your Atlas deployments. Some of the more common forms of Atlas automation occur on a schedule, such as pausing a cluster that’s only used for testing in the evening and resuming the cluster again in the morning.
Having an API to automate Atlas actions is great, but you’re still on the hook for writing the script that calls the API, finding a place to host the script, and setting up the job to call the script on your desired schedule. This is where Atlas Scheduled Triggers come to the rescue.
In this article, I will show you how a Scheduled Trigger can be used to easily incorporate automation into your environment. In addition to pausing and unpausing a cluster, I’ll similarly show how cluster scale up and down events could also be placed on a schedule. Both of these activities allow you to save on costs for when you either don’t need the cluster (paused), or don’t need it to support peak workloads (scale down).
# Architecture
Three example scheduled triggers are provided in this solution. Each trigger has an associated trigger function. The bulk of the work is handled by the **modifyCluster** function, which as the name implies is a generic function for making modifications to a cluster. It's a wrapper around the Atlas Update Configuration of One Cluster Admin API.
# Preparation
## Generate an API Key
In order to call the Atlas Administrative APIs, you'll first need an API Key with the Organization Owner role. API Keys are created in the Access Manager. At the Organization level (not the Project level), select **Access Manager** from the menu on the left:
Then select the **API Keys** tab.
Create a new key, giving it a good description. Assign the key **Organization Owner** permissions, which will allow it to manage any of the projects in the organization.
Click **Next** and make a note of your Private Key:
Let's limit who can use our API key by adding an access list. In our case, the API key is going to be used by a Trigger which is a component of Atlas App Services. You will find the list of IP addresses used by App Services in the documentation under Firewall Configuration. Note, each IP address must be added individually. Here's an idea you can vote for to get this addressed: Ability to provide IP addresses as a list for Network Access
Click **Done.**
# Deployment
## Create a Project for Automation
Since this solution works across your entire Atlas organization, I like to host it in its own dedicated Atlas Project.
## Create an Application
We will host our trigger in an Atlas App Services Application. To begin, just click the App Services tab:
You'll see that App Services offers a bunch of templates to get you started. For this use case, just select the first option to **Build your own App**:
You'll then be presented with options to link a data source, name your application and choose a deployment model. The current iteration of this utility doesn't use a data source, so you can ignore that step (a free cluster is created for you regardless). You can also leave the deployment model at its default (Global), unless you want to limit the application to a specific region.
I've named the application **Automation App**:
Click **Create App Service**. If you're presented with a set of guides, click **Close Guides** as today I am your guide.
From here, you have the option to simply import the App Services application and adjust any of the functions to fit your needs. If you prefer to build the application from scratch, skip to the next section.
# Import Option
## Step 1: Store the API Secret Key
The extract has a dependency on the API Secret Key, thus the import will fail if it is not configured beforehand.
Use the **Values** menu on the left to Create a Secret named **AtlasPrivateKeySecret** containing your private key (the secret is not in quotes):
## Step 2: Install the App Services CLI
The App Services CLI is available on npm. To install the App Services CLI on your system, ensure that you have Node.js installed and then run the following command in your shell:
```zsh
✗ npm install -g atlas-app-services-cli
```
## Step 3: Extract the Application Archive
Download and extract the **AutomationApp.zip**.
## Step 4: Log into Atlas
To configure your app with App Services CLI, you must log in to Atlas using your API keys:
```zsh
✗ appservices login --api-key="" --private-api-key=""
Successfully logged in
```
## Step 5: Get the Application ID
Select the **App Settings** menu and copy your Application ID:
## Step 6: Import the Application
Run the following appservices push command from the directory where you extracted the export:
```zsh
appservices push --remote=""
...
A summary of changes
...
? Please confirm the changes shown above Yes
Creating draft
Pushing changes
Deploying draft
Deployment complete
Successfully pushed app up:
```
After the import, replace the `AtlasPublicKey` with your API public key value.
## Review the Imported Application
The imported application includes 3 self-explanatory sample scheduled triggers:
The 3 triggers have 3 associated Functions. The **pauseClustersTrigger** and **resumeClustersTrigger** functions supply a set of projects and clusters to pause or resume, so these need to be adjusted to fit your needs:
```JavaScript
// Supply projectIDs and clusterNames...
const projectIDs = [
{
id: '5c5db514c56c983b7e4a8701',
names: [
'Demo',
'Demo2'
]
},
{
id: '62d05595f08bd53924fa3634',
names: [
'ShardedMultiRegion'
]
}
];
```
All 3 trigger functions call the **modifyCluster** function, where the bulk of the work is done.
In addition, you'll find two utility functions, **getProjectClusters** and **getProjects**. These functions are not utilized in this solution, but are provided for reference if you wanted to further automate these processes (that is, removing the hard coded project IDs and cluster names in the trigger functions):
Now that you have reviewed the draft, as a final step go ahead and deploy the App Services application.
# Build it Yourself Option
To understand what's included in the application, here are the steps to build it yourself from scratch.
## Step 1: Store the API Keys
The functions we need to create will call the Atlas Administration APIs, so we need to store our API Public and Private Keys, which we will do using Values & Secrets. The sample code I provide references these values as AtlasPublicKey and AtlasPrivateKey, so use those same names unless you want to change the code where they’re referenced.
You'll find **Values** under the BUILD menu:
First, create a Value for your public key (_note, the key is in quotes_):
Create a Secret containing your private key (the secret is not in quotes):
The Secret cannot be accessed directly, so create a second Value that links to the secret:
## Step 2: Note the Project ID(s)
We need to note the IDs of the projects that have clusters we want to automate. Click the 3 dots in the upper left corner of the UI to open the Project Settings:
Under which you’ll find your Project ID:
## Step 3: Create the Functions
I will create two functions, a generic function to modify a cluster and a trigger function to iterate over the clusters to be paused.
You'll find Functions under the BUILD menu:
## modifyCluster
I’m only demonstrating a couple of things you can do with cluster automation, but the sky is really limitless. The following modifyCluster function is a generic wrapper around the Modify One Multi-Cloud Cluster from One Project API for calling the API from App Services (or Node.js for that matter).
Create a New Function named **modifyCluster**. Set the function to Private as it will only be called by our trigger. The other default settings are fine:
Switch to the Function Editor tab and paste the following code:
```JavaScript
/*
* Modifies the cluster as defined by the body parameter.
* See https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Clusters/operation/updateCluster
*
*/
exports = async function(username, password, projectID, clusterName, body) {
// Easy testing from the console
if (username == "Hello world!") {
username = await context.values.get("AtlasPublicKey");
password = await context.values.get("AtlasPrivateKey");
projectID = "5c5db514c56c983b7e4a8701";
clusterName = "Demo";
body = {paused: false}
}
const arg = {
scheme: 'https',
host: 'cloud.mongodb.com',
path: 'api/atlas/v2/groups/' + projectID + '/clusters/' + clusterName,
username: username,
password: password,
headers: {'Accept': ['application/vnd.atlas.2023-11-15+json'], 'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
digestAuth:true,
body: JSON.stringify(body)
};
// The response body is a BSON.Binary object. Parse it and return.
response = await context.http.patch(arg);
return EJSON.parse(response.body.text());
};
```
To test this function, you need to supply an API key, an API secret, a project Id, an associated cluster name to modify, and a payload containing the modifications you'd like to make. In our case it's simply setting the paused property.
> **Note**: By default, the Console supplies 'Hello world!' when test running a function, so my function code tests for that input and provides some default values for easy testing.
```JavaScript
// Easy testing from the console
if (username == "Hello world!") {
username = await context.values.get("AtlasPublicKey");
password = await context.values.get("AtlasPrivateKey");
projectID = "5c5db514c56c983b7e4a8701";
clusterName = "Demo";
body = {paused: false}
}
```
Press the **Run** button to see the results, which will appear in the Result window:
And you should find you cluster being resumed (or paused):
## pauseClustersTrigger
This function will be called by a trigger. As it's not possible to pass parameters to a scheduled trigger, it uses a hard-coded list of project Ids and associated cluster names to pause. Ideally these values would be stored in a collection with a nice UI to manage all of this, but that's a job for another day :-).
_In the appendix of this article, I provide functions that will get all projects and clusters in the organization. That would create a truly dynamic operation that would pause all clusters. You could then alternatively refactor the code to use an exclude list instead of an allow list._
```JavaScript
/*
* Iterates over the provided projects and clusters, pausing those clusters
*/
exports = async function() {
// Supply projectIDs and clusterNames...
const projectIDs = [{id:'5c5db514c56c983b7e4a8701', names:['Demo', 'Demo2']}, {id:'62d05595f08bd53924fa3634', names:['ShardedMultiRegion']}];
// Get stored credentials...
const username = context.values.get("AtlasPublicKey");
const password = context.values.get("AtlasPrivateKey");
// Set desired state...
const body = {paused: true};
var result = "";
projectIDs.forEach(async function (project) {
project.names.forEach(async function (cluster) {
result = await context.functions.execute('modifyCluster', username, password, project.id, cluster, body);
console.log("Cluster " + cluster + ": " + EJSON.stringify(result));
});
});
return "Clusters Paused";
};
```
## Step 4: Create Trigger - pauseClusters
The ability to pause and resume a cluster is supported by the Modify One Cluster from One Project API. To begin, select Triggers from the menu on the left:
And add a Trigger.
Set the Trigger Type to **Scheduled** and the name to **pauseClusters**:
As for the schedule, you have the full power of CRON Expressions at your fingertips. For this exercise, let’s assume we want to pause the cluster every evening at 6pm. Select **Advanced** and set the CRON schedule to `0 22 * * *`.
> **Note**, the time is in GMT, so adjust accordingly for your timezone. As this cluster is running in US East, I’m going to add 4 hours:
Check the Next Events window to validate the job will run when you desire.
The final step is to select the function for the trigger to execute. Select the **pauseClustersTrigger** function.
And **Save** the trigger.
The final step is to **REVIEW DRAFT & DEPLOY**.
# Resume the Cluster
You could opt to manually resume the cluster(s) as it’s needed. But for completeness, let’s assume we want the cluster(s) to automatically resume at 8am US East every weekday morning.
Duplicate the pauseClustersTrigger function to a new function named **resumeClustersTrigger**.
At a minimum, edit the function code setting **paused** to **false**. You could also adjust the projectIDs and clusterNames to a subset of projects to resume:
```JavaScript
/*
* Iterates over the provided projects and clusters, resuming those clusters
*/
exports = async function() {
// Supply projectIDs and clusterNames...
const projectIDs = [{id:'5c5db514c56c983b7e4a8701', names:['Demo', 'Demo2']}, {id:'62d05595f08bd53924fa3634', names:['ShardedMultiRegion']}];
// Get stored credentials...
const username = context.values.get("AtlasPublicKey");
const password = context.values.get("AtlasPrivateKey");
// Set desired state...
const body = {paused: false};
var result = "";
projectIDs.forEach(async function (project) {
project.names.forEach(async function (cluster) {
result = await context.functions.execute('modifyCluster', username, password, project.id, cluster, body);
console.log("Cluster " + cluster + ": " + EJSON.stringify(result));
});
});
return "Clusters Paused";
};
```
Then add a new scheduled trigger named **resumeClusters**. Set the CRON schedule to: `0 12 * * 1-5`. The Next Events validates for us this is exactly what we want:
(Screenshot: the resume trigger's schedule and Next Events.)
# Create Trigger: Scaling Up and Down
It’s not uncommon to have workloads that are more demanding during certain hours of the day or days of the week. Rather than running your cluster to support peak capacity, you can use this same approach to schedule your cluster to scale up and down as your workload requires it.
> **_NOTE:_** Atlas Clusters already support Auto-Scaling, which may very well suit your needs. The approach described here will let you definitively control when your cluster scales up and down.
Let’s say we want to scale up our cluster every day at 9am before our store opens for business.
Add a new function named **scaleClusterUpTrigger**. Here’s the function code. It’s very similar to before, except the body’s been changed to alter the provider settings:
> **_NOTE:_** This example represents a single-region topology. If you have multiple regions and/or asymmetric clusters using read-only and/or analytics nodes, just check the Modify One Cluster from One Project API documentation for the payload details.
```JavaScript
exports = async function() {
// Supply projectID and clusterNames...
const projectID = '';
const clusterName = '';
// Get stored credentials...
const username = context.values.get("AtlasPublicKey");
const password = context.values.get("AtlasPrivateKey");
// Set the desired instance size...
const body = {
"replicationSpecs": [
{
"regionConfigs": [
{
"electableSpecs": {
"instanceSize": "M10",
"nodeCount":3
},
"priority":7,
"providerName": "AZURE",
"regionName": "US_EAST_2",
},
]
}
]
};
result = await context.functions.execute('modifyCluster', username, password, projectID, clusterName, body);
console.log(EJSON.stringify(result));
if (result.error) {
return result;
}
return clusterName + " scaled up";
};
```
Then add a scheduled trigger named **scaleClusterUp**. Set the CRON schedule to: `0 13 * * *`.
Scaling a cluster back down would simply be another trigger, scheduled to run when you want, using the same code above, setting the **instanceSize** to whatever you desire.
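As a rough sketch of what that could look like, a **scaleClusterDownTrigger** function might be written as below. The project ID, cluster name, region, and instance size are placeholders mirroring the scale-up example, so adjust them to your own baseline tier:

```JavaScript
exports = async function() {
  // Supply projectID and clusterName...
  const projectID = '';
  const clusterName = '';

  // Get stored credentials...
  const username = context.values.get("AtlasPublicKey");
  const password = context.values.get("AtlasPrivateKey");

  // Set the baseline (off-peak) instance size...
  const body = {
    "replicationSpecs": [
      {
        "regionConfigs": [
          {
            "electableSpecs": {
              "instanceSize": "M10",
              "nodeCount": 3
            },
            "priority": 7,
            "providerName": "AZURE",
            "regionName": "US_EAST_2",
          },
        ]
      }
    ]
  };

  result = await context.functions.execute('modifyCluster', username, password, projectID, clusterName, body);
  console.log(EJSON.stringify(result));

  if (result.error) {
    return result;
  }

  return clusterName + " scaled down";
};
```

A matching scheduled trigger would then run it after business hours, for example with a CRON schedule of `0 22 * * *` for 6pm US East.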
And that’s it. I hope you find this beneficial. You should be able to use the techniques described here to easily call any MongoDB Atlas Admin API endpoint from Atlas App Services.
# Appendix
## getProjects
This standalone function can be test run from the App Services console to see the list of all the projects in your organization. You could also call it from other functions to get a list of projects:
```JavaScript
/*
* Returns an array of the projects in the organization
* See https://docs.atlas.mongodb.com/reference/api/project-get-all/
*
* Returns an array of objects, e.g.
*
* {
* "clusterCount": {
* "$numberInt": "1"
* },
* "created": "2021-05-11T18:24:48Z",
* "id": "609acbef1b76b53fcd37c8e1",
* "links": [
* {
* "href": "https://cloud.mongodb.com/api/atlas/v1.0/groups/609acbef1b76b53fcd37c8e1",
* "rel": "self"
* }
* ],
* "name": "mg-training-sample",
* "orgId": "5b4e2d803b34b965050f1835"
* }
*
*/
exports = async function() {
// Get stored credentials...
const username = await context.values.get("AtlasPublicKey");
const password = await context.values.get("AtlasPrivateKey");
const arg = {
scheme: 'https',
host: 'cloud.mongodb.com',
path: 'api/atlas/v1.0/groups',
username: username,
password: password,
headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
digestAuth:true,
};
// The response body is a BSON.Binary object. Parse it and return.
response = await context.http.get(arg);
return EJSON.parse(response.body.text()).results;
};
```
## getProjectClusters
Another example function that will return the cluster details for a provided project.
> Note, to test this function, you need to supply a projectId. By default, the Console supplies ‘Hello world!’, so I test for that input and provide some default values for easy testing.
```JavaScript
/*
* Returns an array of the clusters for the supplied project ID.
* See https://docs.atlas.mongodb.com/reference/api/clusters-get-all/
*
* Returns an array of objects. See the API documentation for details.
*
*/
exports = async function(project_id) {
if (project_id == "Hello world!") { // Easy testing from the console
project_id = "5e8f8268d896f55ac04969a1"
}
// Get stored credentials...
const username = await context.values.get("AtlasPublicKey");
const password = await context.values.get("AtlasPrivateKey");
const arg = {
scheme: 'https',
host: 'cloud.mongodb.com',
path: `api/atlas/v1.0/groups/${project_id}/clusters`,
username: username,
password: password,
headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
digestAuth:true,
};
// The response body is a BSON.Binary object. Parse it and return.
response = await context.http.get(arg);
return EJSON.parse(response.body.text()).results;
};
``` | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "In this article I will show you how a Scheduled Trigger can be used to easily incorporate automation into your environment. In addition to pausing and unpausing a cluster, I’ll similarly show how cluster scale up and down events could also be placed on a schedule. Both of these activities allow you to save on costs for when you either don’t need the cluster (paused), or don’t need it to support peak workloads (scale down).",
"contentType": "Tutorial"
} | Atlas Cluster Automation Using Scheduled Triggers | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/sharding-optimization-defragmentation | created | # Optimizing Sharded Collections in MongoDB with Defragmentation
## Table of Contents
* Introduction
* Background
* What is sharded collection fragmentation?
* What is sharded collection defragmentation?
* When should I defragment my sharded collection?
* Defragmentation process overview
* How do I defragment my sharded collection?
* How to monitor the defragmentation process
* How to stop defragmentation
* Collection defragmentation example
* FAQs
## Introduction
So, what do you do if you have a large number of chunks in your sharded cluster and want to reduce the impact of chunk migrations on CRUD latency? You can use collection defragmentation!
In this post, we’ll cover when you should consider defragmenting a collection, the benefits of defragmentation for your sharded cluster, and cover all of the commands needed to execute, monitor, and stop defragmentation. If you are new to sharding or want a refresher on how MongoDB delivers horizontal scalability, please check out the MongoDB manual.
## Background
A sharded collection is stored as “chunks,” and a balancer moves data around to maintain an equal distribution of data between shards. In MongoDB 6.0, when the difference in the amount of data between two shards is two times the configured chunk size, the MongoDB balancer automatically migrates chunks between shards. For collections with a chunk size of 128MB, we will migrate data between shards if the difference in data size exceeds 256MB.
Every time it migrates a chunk, MongoDB needs to update the new location of this chunk in its routing table. The routing table stores the location of all the chunks contained in your collection. The more chunks in your collection, the more "locations" in the routing table, and the larger the routing table will be. The larger the routing table, the longer it takes to update it after each migration. When updating the routing table, MongoDB blocks writes to your collection. As a result, it’s important to keep the number of chunks for your collection to a minimum.
By merging as many chunks as possible via defragmentation, you reduce the size of the routing table by reducing the number of chunks in your collection. The smaller the routing table, the shorter the duration of write blocking on your collection for chunk migrations, merges, and splits.
## What is sharded collection fragmentation?
A collection with an excessive number of chunks is considered fragmented.
In this example, a customer’s collection has ~615K chunks on each shard.
## What is sharded collection defragmentation?
Defragmentation is the concept of merging contiguous chunks in order to reduce the number of chunks in your collection.
In our same example, after defragmentation on December 5th, the number of chunks has gone down to 650 chunks on each shard. The customer has managed to reduce the number of chunks in their cluster by a factor of 1000.
## When should I defragment my sharded collection?
Defragmentation of a collection should be considered in the following cases:
* A sharded collection contains more than 20,000 chunks.
* Once chunk migrations are complete after adding and removing shards.
## The defragmentation process overview
The process is composed of three distinct phases that all help reduce the number of chunks in your chosen collection. The first phase automatically merges mergeable chunks on the same shard. The second phase migrates smaller chunks to other shards so they can be merged. The third phase scans the cluster one final time and merges any remaining mergeable chunks that reside on the same shard.
The defragment operation will respect your balancing window and any configured zones.
**Note**: Do not modify the chunkSize value while defragmentation is executing as this may lead to improper behavior.
### Phase one: merge and measure
In phase one of the defragmentation process, MongoDB scans every shard in the cluster and merges any mergeable chunks that reside on the same shard. The data size of the resulting chunks is stored for the next phase of the defragmentation process.
### Phase two: move and merge
After phase one is completed, there might be some small chunks leftover. Chunks that are less than 25% of the max chunk size set are identified as small chunks. For example, with MongoDB’s default chunk size of 128MB, all chunks of 32MB or less would be considered small. The balancer then attempts to find other chunks across every shard to determine if they can be merged. If two chunks can be merged, the smaller of the two is moved to be merged with the second chunk. This also means that the larger your configured chunk size, the more “small” chunks you can move around, and the more you can defragment.
### Phase three: final merge
In this phase, the balancer scans the entire cluster to find any other mergeable chunks that reside on the same shard and merges them. The defragmentation process is now complete.
## How do I defragment my sharded collection?
If you have a highly fragmented collection, you can defragment it by issuing a command to initiate defragmentation via configureCollectionBalancing options.
```
db.adminCommand(
{
configureCollectionBalancing: ".",
defragmentCollection: true
}
)
```
## How to monitor the defragmentation process
Throughout the process, you can monitor the status of defragmentation by executing balancerCollectionStatus. Please refer to our balancerCollectionStatus manual for a detailed example on the output of the balancerCollectionStatus command during defragmentation.
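For example, for the collection defragmented later in this article, the status check would look like this:
```
db.adminCommand(
  {
    balancerCollectionStatus: "vehicles.airplanes"
  }
)
```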
## How to stop defragmentation
Defragmenting a collection can be safely stopped at any time during any phase by issuing a command to stop defragmentation via configureCollectionBalancing options.
```
db.adminCommand(
{
configureCollectionBalancing: ".",
defragmentCollection: false
}
)
```
## Collection defragmentation example
Let’s defragment a collection called `"airplanes"` in the `"vehicles"` database, with the current default chunk size of 128MB.
```
db.adminCommand(
{
configureCollectionBalancing: "vehicles.airplanes",
defragmentCollection: true
})
```
This will start the defragmentation process. You can monitor the process by using the balancerCollectionStatus command. Here’s an example of the output in each phase of the process.
### Phase one: merge and measure
```
{
"balancerCompliant": false,
"firstComplianceViolation": "defragmentingChunks",
"details": {
"currentPhase": "mergeAndMeasureChunks",
"progress": { "remainingChunksToProcess": 1 }
}
}
```
Since this phase of the defragmentation process contains multiple operations such as `mergeChunks` and `dataSize`, the value of the `remainingChunksToProcess` field will not change when the `mergeChunk` operation has been completed on a chunk but the dataSize operation is not complete for the same chunk.
### Phase two: move and merge
```
{
"balancerCompliant": false,
"firstComplianceViolation": "defragmentingChunks",
"details": {
"currentPhase": "moveAndMergeChunks",
"progress": { "remainingChunksToProcess": 1 }
}
}
```
Since this phase of the defragmentation process contains multiple operations, the value of the `remainingChunksToProcess` field will not change when a migration is complete but the `mergeChunk` operation is not complete for the same chunk.
### Phase three: final merge
```
{
"balancerCompliant": false,
"firstComplianceViolation": "defragmentingChunks",
"details": {
"currentPhase": "mergeChunks",
"progress": { "remainingChunksToProcess": 1 }
}
}
```
When the process is complete, for a balanced collection the document returns the following information.
```
{
"balancerCompliant" : true,
"ok" : 1,
"operationTime" : Timestamp(1583193238, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1583193238, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
```
**Note**: There is a possibility that your collection is not balanced at the end of defragmentation. The balancer will then kick in and start migrating data as it does regularly.
## FAQs
* **How long does defragmentation take?**
* The duration for defragmentation will vary depending on the size and the “fragmentation state” of a collection, with larger and more fragmented collections taking longer.
* The first phase of defragmentation merges chunks on the same shard delivering immediate benefits to your cluster. Here are some worst-case estimates for the time to complete phase one of defragmentation:
* Collection with 100,000 chunks - < 18 hrs
* Collection with 1,000,000 chunks - < 6 days
* The complete defragmentation process involves the movement of chunks between shards where speeds can vary based on the resources available and the cluster’s configured chunk size. It is difficult to estimate how long it will take for your cluster to complete defragmentation.
* **Can I use defragmentation to just change my chunk size?**
* Yes, just run the command with `"defragmentCollection: false"`.
* **How do I stop an ongoing defragmentation?**
* Run the following command:
```
db.adminCommand(
{
configureCollectionBalancing: ".",
defragmentCollection: false
}
)
```
* **Can I change my chunk size during defragmentation?**
* Yes, but this will result in a less than optimal defragmentation since the new chunk size will only be applied to any future phases of the operation.
* Alternatively, you can stop an ongoing defragmentation by running the command again with `"defragmentCollection: false"`. Then just run the command with the new chunk size and `"defragmentCollection: true"`.
* **What happens if I run defragmentation with a different chunk size on a collection where defragmentation is already in progress?**
* Do not run defragmentation with a different chunk size on a collection that is being defragmented as this causes the defragmentation process to utilize the new value in the next phase of the defragmentation process, resulting in a less than optimal defragmentation.
* **Can I run defragmentation on multiple collections simultaneously?**
* Yes. However, a shard can only participate in one migration at a time — meaning during the second phase of defragmentation, a shard can only donate or receive one chunk at a time.
* **Can I defragment collections to different chunk sizes?**
* Yes, chunk size is specific to a collection. So different collections can be configured to have different chunk sizes, if desired.
* **Why do I see a 1TB chunk on my shards even though I set chunkSize to 256MB?**
* In MongoDB 6.0, the cluster will no longer partition data unless it’s necessary to facilitate a migration. So, chunks may exceed the configured `chunkSize`. This behavior reduces the number of chunks on a shard which in turn reduces the impact of migrations on a cluster.
* **Is the value “true” for the key defragmentCollection of configureCollectionBalancing persistent once set?**
* The `defragmentCollection` key will only have a value of `"true"` while the defragmentation process is occurring. Once the defragmentation process ends, the value for defragmentCollection field will be unset from true.
* **How do I know if defragmentation is running currently, stopped, or started successfully?**
* Use the balancerCollectionStatus command to determine the current state of defragmentation on a given collection.
* In the document returned by the `balancerCollectionStatus` command, the firstComplianceViolation field will display `“defragmentingChunks”` when a collection is actively being defragmented.
* When a collection is not being defragmented, the balancer status returns a different value for “firstComplianceViolation”.
* If the collection is unbalanced, the command will return `“balancerCompliant: false”` and `“firstComplianceViolation`: `“chunksImbalance””`.
* If the collection is balanced, the command will return `“balancerCompliant: true”`. See balancerCollectionStatus for more information on the other possible values.
* **How does defragmentation impact my workload?**
* The impact of defragmentation on a cluster is similar to a migration. Writes will be blocked to the collection being defragmented while the metadata refreshes occur in response to the underlying merge and move defragmentation operations. The duration of the write blockage can be estimated by reviewing the mongod logs of a previous donor shard.
* Secondary reads will be affected during defragmentation operations as the changes on the primary node need to be replicated to the secondaries.
* Additionally, normal balancing operations will not occur for a collection being defragmented.
* **What if I have a balancing window?**
* The defragmentation process respects balancing windows and will not execute any defragmentation operations outside of the configured balancing window.
* **Is defragmentation resilient to crashes or stepdowns?**
* Yes, the defragmentation process can withstand a crash or a primary step down. Defragmentation will automatically restart after the completion of the step up of the new primary.
* **Is there a way to just do Phase One of defragmentation?**
* You can’t currently, but we may be adding this capability in the near future.
* **What if I’m still not happy with the number of chunks in my cluster?**
* Consider setting your chunk size to 1GB (1024MB) for defragmentation in order to move more mergeable chunks.
```
db.adminCommand(
{
configureCollectionBalancing: ".",
chunkSize: 1024,
defragmentCollection: true
}
)
```
* **How do I find my cluster’s configured chunk size?**
* You can check it in the `“config”` database.
```
use config
db.settings.find()
```
**Note**: If the command above returns Null, that means the cluster’s default chunk size has not been overridden and the default chunk size of 128MB is currently in use.
* **How do I find a specific collection’s chunk size?**
```
use <database>
db.adminCommand(
{
    balancerCollectionStatus: "<database>.<collection>"
}
)
```
* **How do I find a specific collection’s number of chunks?**
```
use <database>
db.collection_name.getShardDistribution()
``` | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to optimize your MongoDB sharded cluster with defragmentation.",
"contentType": "Article"
} | Optimizing Sharded Collections in MongoDB with Defragmentation | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/kotlin/spring-boot3-kotlin-mongodb | created | # Getting Started with Backend Development in Kotlin Using Spring Boot 3 & MongoDB
> This is an introduction article on how to build a RESTful application in Kotlin using Spring Boot 3 and MongoDB Atlas.
## Introduction
Today, we are going to build a basic RESTful application that does a little more than a CRUD operation, and for that, we will use:
* `Spring Boot 3`, which is one of the popular frameworks based on Spring, allowing developers to quickly build production-grade applications.
* `MongoDB`, which is a document-oriented database, allowing developers to focus on building apps rather than on database schema.
## Prerequisites
This is a getting-started article, so nothing much is needed as a prerequisite. But familiarity with Kotlin as a programming language, plus a basic understanding of Rest API and HTTP methods, would be helpful.
To help with development activities, we will be using Jetbrains IntelliJ IDEA (Community Edition).
## HelloWorld app!
Building a HelloWorld app in any programming language/technology, I believe, is the quickest and easiest way to get familiar with it. This helps you cover the basic concepts, like how to build, run, debug, deploy, etc.
Since we are using the community version of IDEA, we cannot create the `HelloWorld` project directly from the IDE itself using the New Project wizard. But we can use the Spring initializer app instead, which allows us to create a Spring
project out of the box.
Once you are on the website, you can update the default selected parameters for the project, like the name of the project, language, version of `Spring Boot`, etc., to something similar as shown below.
And since we want to create REST API with MongoDB as a database, let's add the dependency using the Add Dependency button on the right.
After all the updates, our project settings will look like this.
Now we can download the project folder using the generate button and open it using the IDE. If we scan the project folder, we will only find one class — i.e., `HelloBackendWorldApplication.kt`, which has the `main` function, as well.
The next step is to print HelloWorld on the screen. Since we are building a restful
application, we will create a `GET` request API. So, let's add a function to act as a `GET` API call.
```kotlin
@GetMapping("/hello")
fun hello(@RequestParam(value = "name", defaultValue = "World") name: String?): String {
return String.format("Hello %s!", name)
}
```
We also need to add an annotation of `@RestController` to our `class` to make it a `Restful` client.
```kotlin
@SpringBootApplication
@RestController
class HelloBackendWorldApplication {
@GetMapping("/hello")
fun hello(): String {
return "Hello World!"
}
}
fun main(args: Array<String>) {
    runApplication<HelloBackendWorldApplication>(*args)
}
```
Now, let's run our project using the run icon from the toolbar.
Now load http://localhost:8080/hello in the browser once the build is complete, and it will print Hello World on your screen.
And on cross-validating this from Postman, we can clearly understand that our `Get` API is working perfectly.
It's time to understand the basics of `Spring Boot` that made it so easy to create our first API call.
## What is Spring Boot ?
> As per official docs, Spring Boot makes it easy to create stand-alone, production-grade, Spring-based applications that you can "just run."
This implies that it's a tool built on top of the Spring framework, allowing us to build web applications quickly.
`Spring Boot` uses annotations, which do the heavy lifting in the background. A few of them, we have used already, like:
1. `@SpringBootApplication`: This annotation is marked at class level and declares to the code reader (developer) and Spring that it's a Spring Boot project. It combines behaviour that could otherwise be enabled individually using `@EnableAutoConfiguration`, `@ComponentScan`, and `@Configuration`.
2. `@RequestMapping` and `@RestController`: These annotations provide the routing information. Routing is simply the mapping of an `HTTP` request path (the text after `host/`) to the classes that implement handlers for the various `HTTP` methods.
These annotations are sufficient for building a basic application. Using Spring Boot, we will create a RESTful web service with all business logic, but we don't have a data container that can store or provide data to run these operations.
## Introduction to MongoDB
For our app, we will be using MongoDB as the database. MongoDB is an open-source, cross-platform, and distributed document database, which allows building apps with flexible schema. This is great as we can focus on building the app rather than defining the schema.
We can get started with MongoDB really quickly using MongoDB Atlas, which is a database as a service in the cloud and has a free forever tier.
I recommend that you explore the MongoDB Jumpstart series to get familiar with MongoDB and its various services in under 10 minutes.
## Connecting with the Spring Boot app and MongoDB
With the basics of MongoDB covered, now let's connect our Spring Boot project to it. Connecting with MongoDB is really simple, thanks to the Spring Data MongoDB plugin.
To connect with MongoDB Atlas, we just need a database URL that can be added
as a `spring.data.mongodb.uri` property in `application.properties` file. The connection string can be found as shown below.
The format for the connection string is:
```shell
spring.data.mongodb.uri = mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/<database>
```
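For illustration, a filled-in property might look like the line below — the username, password, cluster address, and database name here are made-up placeholders rather than values from this tutorial:
```shell
spring.data.mongodb.uri=mongodb+srv://mongo_user:mongo_password@mycluster.abcde.mongodb.net/sample_restaurants
```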
## Creating a CRUD RESTful app
With all the basics covered, now let's build a more complex application than HelloWorld! In this app, we will be covering all CRUD operations and tweaking them along the way to make it a more realistic app. So, let's create a new project similar to the HelloWorld app we created earlier. And for this app, we will use one of the sample datasets provided by MongoDB — one of my favourite features that enables quick learning.
You can load a sample dataset on Atlas as shown below:
We will be using the `sample_restaurants` collection for our CRUD application. Before we start with the actual CRUD operation, let's create the restaurant model class equivalent to it in the collection.
```kotlin
@Document("restaurants")
data class Restaurant(
@Id
val id: ObjectId = ObjectId(),
val address: Address = Address(),
val borough: String = "",
val cuisine: String = "",
    val grades: List<Grade> = emptyList(),
val name: String = "",
@Field("restaurant_id")
val restaurantId: String = ""
)
data class Address(
val building: String = "",
val street: String = "",
val zipcode: String = "",
@Field("coord")
    val coordinate: List<Double> = emptyList()
)
data class Grade(
val date: Date = Date(),
@Field("grade")
val rating: String = "",
val score: Int = 0
)
```
You will notice there is nothing fancy about this class except for the annotation. These annotations help us to connect or co-relate classes with databases like:
* `@Document`: This declares that this data class represents a document in Atlas.
* `@Field`: This is used to define an alias name for a property in the document, like `coord` for coordinate in `Address` model.
Now let's create a repository class where we can define all methods through which we can access data. `Spring Boot` has interface `MongoRepository`, which helps us with this.
```kotlin
interface Repo : MongoRepository<Restaurant, ObjectId> {
fun findByRestaurantId(id: String): Restaurant?
}
```
After that, we create a controller through which we can call these queries. Since this is a bigger project, unlike the HelloWorld app, we will create a separate controller where the `MongoRepository` instance is passed using `@Autowired`, which provides annotations-driven dependency injection.
```kotlin
@RestController
@RequestMapping("/restaurants")
class Controller(@Autowired val repo: Repo) {
}
```
### Read operation
Now our project is ready to do some action, so let's count the number of restaurants in the collection using `GetMapping`.
```kotlin
@RestController
@RequestMapping("/restaurants")
class Controller(@Autowired val repo: Repo) {
@GetMapping
fun getCount(): Int {
return repo.findAll().count()
}
}
```
Taking a step further to read the restaurant-based `restaurantId`. We will have to add a method in our repo as `restaurantId` is not marked `@Id` in the restaurant class.
```kotlin
interface Repo : MongoRepository<Restaurant, ObjectId> {
fun findByRestaurantId(restaurantId: String): Restaurant?
}
```
```kotlin
@GetMapping("/{id}")
fun getRestaurantById(@PathVariable("id") id: String): Restaurant? {
return repo.findByRestaurantId(id)
}
```
And again, we will be using Postman to validate the output against a random `restaurantId` from the sample dataset.
Let's also validate this against a non-existing `restaurantId`.
As expected, we haven't gotten any results, but the API response code is still 200, which is incorrect! So, let's fix this.
In order to have the correct response code, we will have to check the result before sending it back with the correct response code.
```kotlin
@GetMapping("/{id}")
fun getRestaurantById(@PathVariable("id") id: String): ResponseEntity<Restaurant> {
val restaurant = repo.findByRestaurantId(id)
return if (restaurant != null) ResponseEntity.ok(restaurant) else ResponseEntity
.notFound().build()
}
```
### Write operation
To add a new object to the collection, we can add a `write` function in the `repo` we created earlier, or we can use the inbuilt method `insert` provided by `MongoRepository`. Since we will be adding a new object to the collection, we'll be using `@PostMapping` for this.
```kotlin
@PostMapping
fun postRestaurant(): Restaurant {
val restaurant = Restaurant().copy(name = "sample", restaurantId = "33332")
return repo.insert(restaurant)
}
```
### Update operation
Spring doesn't have a specific built-in update operation like the other CRUD operations, so we will combine the read and write operations to perform an update.
```kotlin
@PatchMapping("/{id}")
fun updateRestaurant(@PathVariable("id") id: String): Restaurant? {
return repo.findByRestaurantId(restaurantId = id)?.let {
repo.save(it.copy(name = "Update"))
}
}
```
This is not an ideal way of updating items in the collection as it requires two operations and can be improved further if we use the MongoDB native driver, which allows us to perform complicated operations with the minimum number of steps.
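As a side note, one way to collapse this into a single database operation is Spring Data's `MongoTemplate`, which can issue an atomic `updateFirst` directly against the server. The sketch below is not part of the sample project above — the controller class, route, and injected `mongoTemplate` bean are assumptions for illustration only:
```kotlin
import org.springframework.data.mongodb.core.MongoTemplate
import org.springframework.data.mongodb.core.query.Criteria
import org.springframework.data.mongodb.core.query.Query
import org.springframework.data.mongodb.core.query.Update
import org.springframework.web.bind.annotation.*

@RestController
@RequestMapping("/restaurants-template")
class TemplateController(val mongoTemplate: MongoTemplate) {

    // Renames a restaurant in a single round trip instead of find + save.
    @PatchMapping("/{id}")
    fun updateRestaurantName(@PathVariable("id") id: String): Long {
        val query = Query(Criteria.where("restaurantId").`is`(id))
        val update = Update().set("name", "Update")
        // updateFirst applies the change atomically on the server side.
        return mongoTemplate.updateFirst(query, update, Restaurant::class.java).modifiedCount
    }
}
```
This keeps the update to a single database operation and avoids the gap between the read and the subsequent save.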
### Delete operation
Deleting a restaurant is also similar. We can use the `MongoRepository` `delete` function, which takes the item to be removed from the collection as input.
```kotlin
@DeleteMapping("/{id}")
fun deleteRestaurant(@PathVariable("id") id: String) {
repo.findByRestaurantId(id)?.let {
repo.delete(it)
}
}
```
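As with the read endpoint, the code above returns a 200 response even when no restaurant matches the id. If you want the delete endpoint to report its outcome explicitly, a small hypothetical variation (not in the original sample) could look like this:
```kotlin
@DeleteMapping("/{id}")
fun deleteRestaurant(@PathVariable("id") id: String): ResponseEntity<Unit> {
    // Return 404 when there is nothing to delete, 204 otherwise.
    val restaurant = repo.findByRestaurantId(id) ?: return ResponseEntity.notFound().build()
    repo.delete(restaurant)
    return ResponseEntity.noContent().build()
}
```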
## Summary
Thank you for reading and hopefully you find this article informative! The complete source code of the app can be found on GitHub.
If you have any queries or comments, you can share them on the MongoDB forum or tweet me @codeWithMohit. | md | {
"tags": [
"Kotlin",
"MongoDB",
"Spring"
],
"pageDescription": "This is an introductory article on how to build a RESTful application in Kotlin using Spring Boot 3 and MongoDB Atlas.",
"contentType": "Tutorial"
} | Getting Started with Backend Development in Kotlin Using Spring Boot 3 & MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-migrate-from-core-data-swiftui | created | # Migrating a SwiftUI iOS App from Core Data to Realm
Porting an app that's using Core Data to Realm is very simple. If you
have an app that already uses Core Data, and have been considering the
move to Realm, this step-by-step guide is for you! The way that your
code interacts with Core Data and Realm is very different depending on
whether your app is based on SwiftUI or UIKit—this guide assumes SwiftUI
(a UIKit version will come soon.)
You're far from the first developer to port your app from Core Data to
Realm, and we've been told many times that it can be done in a matter of
hours. Both databases handle your data as objects, so migration is
usually very straightforward: Simply take your existing Core Data code
and refactor it to use the Realm
SDK.
After migrating, you should be thrilled with the ease of use, speed, and
stability that Realm can bring to your apps. Add in MongoDB Realm
Sync and you can share the same
data between iOS, Android, desktop, and web apps.
>
>
>This article was updated in July 2021 to replace `objc` and `dynamic`
>with the `@Persisted` annotation that was introduced in Realm-Cocoa
>10.10.0.
>
>
## Prerequisites
This guide assumes that your app is written in Swift and built on
SwiftUI rather than UIKit.
## Steps to Migrate Your Code
### 1. Add the Realm Swift SDK to Your Project
To use Realm, you need to include Realm's Swift SDK
(Realm-Cocoa) in your Xcode
project. The simplest method is to use the Swift Package Manager.
In Xcode, select "File/Swift Packages/Add Package Dependency...". The
package URL is :
You can keep the default options and then select both the "Realm" and
"RealmSwift" packages.
### 2a. The Brutalist Approach—Remove the Core Data Framework
First things first. If your app is currently using Core Data, you'll
need to work out which parts of your codebase include Core Data code.
These will need to be refactored. Fortunately, there's a handy way to do
this. While you could manually perform searches on the codebase looking
for the relevant code, a much easier solution is to simply delete the
Core Data import statements at the top of your source files:
``` swift
import CoreData
```
Once this is done, every line of code implementing Core Data will throw
a compiler error, and then it's simply a matter of addressing each
compiler error, one at a time.
### 2b. The Incremental Approach—Leave the Core Data Framework Until Port Complete
Not everyone (including me) likes the idea of not being able to build a
project until every part of the port has been completed. If that's you,
I'd suggest this approach:
- Leave the old code there for now.
- Add a new model, adding `Realm` to the end of each class.
- Work through your views to move them over to your new model.
- Check and fix build breaks.
- Remove the `Realm` from your model names using the Xcode refactoring
feature.
- Check and fix build breaks.
- Find any files that still `import CoreData` and either remove that
line or the entire file if it's now obsolete.
- Check and fix build breaks.
- Migrate existing user data from Core Data to Realm if needed.
- Remove the original model code.
### 3. Remove Core Data Setup Code
In Core Data, changes to model objects are made against a managed object
context object. Managed object context objects are created against a
persistent store coordinator object, which themselves are created
against a managed object model object.
Suffice to say, before you can even begin to think about writing or
reading data with Core Data, you usually need to have code somewhere in
your app to set up these dependency objects and to expose Core Data's
functionality to your app's own logic. There will be a sizable chunk of
"setup" Core Data code lurking somewhere.
When you're switching to Realm, all of that code can go.
In Realm, all of the setting up is done on your behalf when you access a
Realm object for the first time, and while there are options to
configure it—such as where to place your Realm data file on disk—it's
all completely optional.
### 4. Migrate Your Model Files
Your Realm schema will be defined in code by defining your Realm Object
classes. There is no need for `.xcdatamodel` files when working with
Realm and so you can remove those Core Data files from your project.
In Core Data, the bread-and-butter class that causes subclassed model
objects to be persisted is `NSManagedObject`. The classes for these
kinds of objects are pretty much standard:
``` swift
import CoreData
@objc(ReminderList)
public class ReminderList: NSManagedObject {
@NSManaged public var title: String
    @NSManaged public var reminders: Array<Reminder>
}
@objc(Reminder)
public class Reminder: NSManagedObject {
@NSManaged var title: String
@NSManaged var isCompleted: Bool
@NSManaged var notes: String?
@NSManaged var dueDate: Date?
@NSManaged var priority: Int16
@NSManaged var list: ReminderList
}
```
Converting these managed object subclasses to Realm is really simple:
``` swift
import RealmSwift
class ReminderList: Object, ObjectKeyIdentifiable {
@Persisted var title: String
    @Persisted var reminders: List<Reminder>
}
class Reminder: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var title: String
@Persisted var isCompleted: Bool
@Persisted var notes: String?
@Persisted var dueDate: Date?
@Persisted var priority: Int16
}
```
Note that top-level objects inherit from `Object`, but objects that only
exist within higher-level objects inherit from `EmbeddedObject`.
### 5. Migrate Your Write Operations
Creating a new object in Core Data and then later modifying it is
relatively trivial, only taking a few lines of code.
Adding an object to Core Data must be done using a
`NSManagedObjectContext`. This context is available inside a SwiftUI
view through the environment:
``` swift
@Environment(\.managedObjectContext) var viewContext: NSManagedObjectContext
```
That context can then be used to save the object to Core Data:
``` swift
let reminder = Reminder(context: viewContext)
reminder.title = title
reminder.notes = notes
reminder.dueDate = date
reminder.priority = priority
do {
try viewContext.save()
} catch {
let nserror = error as NSError
fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
}
```
Realm requires that writes are made within a transaction, but the Realm
Swift SDK hides most of that complexity when you develop with SwiftUI.
The current Realm is made available through the SwiftUI environment and
the view can access objects in it using the `@ObservedResults`
property wrapper:
``` swift
@ObservedResults(Reminder.self) var reminders
```
A new object can then be stored in the Realm:
``` swift
let reminder = Reminder()
reminder.title = title
reminder.notes = notes
reminder.dueDate = date
reminder.priority = priority
$reminders.append(reminder)
```
The Realm Swift SDK also hides the transactional complexity behind
making updates to objects already stored in Realm. The
`@ObservedRealmObject` property wrapper is used in the same way as
`@ObservedObject`—but for Realm managed objects:
``` swift
@ObservedRealmObject var reminder: Reminder
TextField("Notes", text: $reminder.notes)
```
To benefit from the transparent transaction functionality, make sure
that you use the `@ObservedRealmObject` property wrapper as you pass
Realm objects down the view hierarchy.
If you find that you need to directly update an attribute within a Realm
object within a view, then you can use this syntax to avoid having to
explicitly work with Realm transactions (where `reminder` is an
`@ObservedRealmObject`):
``` swift
$reminder.isCompleted.wrappedValue.toggle()
```
### 6. Migrate Your Queries
In its most basic implementation, Core Data uses the concept of fetch
requests in order to retrieve data from disk. A fetch can filter and
sort the objects:
``` swift
var reminders = FetchRequest<Reminder>(
    entity: Reminder.entity(),
    sortDescriptors: [NSSortDescriptor(key: "title", ascending: true)],
    predicate: NSPredicate(format: "%K == %@", "list.title", title)).wrappedValue
```
The equivalent code for such a query using Realm is very similar, but it
uses the `@ObservedResults` property wrapper rather than `FetchRequest`:
``` swift
@ObservedResults(
Reminder.self,
filter: NSPredicate(format: "%K == %@", "list.title", title),
sortDescriptor: SortDescriptor(keyPath: "title", ascending: true)) var reminders
```
### 7. Migrate Your Users' Production Data
Once all of your code has been migrated to Realm, there's one more
outstanding issue: How do you migrate any production data that users may
already have on their devices out of Core Data and into Realm?
This can be a very complex issue. Depending on your app's functionality,
as well as your users' circumstances, how you go about handling this can
end up being very different each time.
We've seen two major approaches:
- Once you've migrated your code to Realm, you can re-link the Core
  Data framework back into your app, use raw `NSManagedObject` fetches
  to read your users' data from Core Data, and then manually pass it
  over to Realm (a sketch of this approach is shown after this list).
  You can leave this migration code in your app permanently, or simply
  remove it after a sufficient period of time has passed.
- If the user's data is replaceable—for example, if it is simply
cached information that could be regenerated by other user data on
disk—then it may be easier to simply blow all of the Core Data save
files away, and start from scratch when the user next opens the app.
This needs to be done with very careful consideration, or else it
could end up being a bad user experience for a lot of people.
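To make the first of these approaches a little more concrete, here is a minimal, hypothetical sketch of a one-off migration function. It assumes the Core Data `NSManagedObject` subclasses have been temporarily renamed with a `CD` prefix (`CDReminderList`, `CDReminder`) so that both models can coexist during the port, and that a managed object context is still available:

``` swift
import CoreData
import RealmSwift

// One-off migration: copy every Core Data reminder list (and its reminders) into Realm.
func migrateListsToRealm(viewContext: NSManagedObjectContext) {
    let fetchRequest = NSFetchRequest<CDReminderList>(entityName: "ReminderList")
    do {
        let cdLists = try viewContext.fetch(fetchRequest)
        let realm = try Realm()
        try realm.write {
            for cdList in cdLists {
                let list = ReminderList()
                list.title = cdList.title
                for cdReminder in cdList.reminders {
                    let reminder = Reminder()
                    reminder.title = cdReminder.title
                    reminder.isCompleted = cdReminder.isCompleted
                    reminder.notes = cdReminder.notes
                    reminder.dueDate = cdReminder.dueDate
                    reminder.priority = cdReminder.priority
                    list.reminders.append(reminder)
                }
                realm.add(list)
            }
        }
    } catch {
        print("Failed to migrate Core Data content to Realm: \(error)")
    }
}
```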
## SwiftUI Previews
As with Core Data, your SwiftUI previews can add some data to Realm so
that it's rendered in the preview. However, with Realm it's a lot easier
as you don't need to mess with contexts and view contexts:
``` swift
func bootstrapReminder() {
do {
let realm = try Realm()
try realm.write {
realm.deleteAll()
let reminder = Reminder()
reminder.title = "Do something"
reminder.notes = "Anything will do"
reminder.dueDate = Date()
reminder.priority = 1
            realm.add(reminder)
}
} catch {
print("Failed to bootstrap the default realm")
}
}
struct ReminderListView_Previews: PreviewProvider {
static var previews: some View {
bootstrapReminder()
return ReminderListView()
}
}
```
## Syncing Realm Data
Now that your application data is stored in Realm, you have the option
to sync that data to other devices (including Android) using MongoDB
Realm Sync. That same data is
then stored in Atlas where it can be queried by web applications via
GraphQL or Realm's web
SDK.
This enhanced functionality is beyond the scope of this guide, but you
can see how it can be added by reading the
Building a Mobile Chat App Using Realm – Integrating Realm into Your App series.
## Conclusion
Thanks to their similarities in exposing data through model objects,
converting an app from using Core Data to Realm is very quick and
simple.
In this guide, we've focussed on the code that needs to be changed to
work with Realm, but you'll be pleasantly surprised at just how much
Core Data boilerplate code you're able to simply delete!
If you've been having trouble getting Core Data working in your app, or
you're looking for a way to sync data between platforms, we strongly
recommend giving Realm a try, to see if it works for you. And if it
does, please be sure to let us know!
If you've any questions or comments, then please let us know on our
community
forum.
>
>
>If you have questions, please head to our developer community
>website where the MongoDB engineers and
>the MongoDB community will help you build your next big idea with
>MongoDB.
>
>
| md | {
"tags": [
"Realm",
"Swift",
"iOS"
],
"pageDescription": "A guide to porting a SwiftUI iOS app from Core Data to MongoDB.",
"contentType": "Tutorial"
} | Migrating a SwiftUI iOS App from Core Data to Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/advanced-data-api-with-atlas-cli | created | # Mastering the Advanced Features of the Data API with Atlas CLI
The MongoDB Atlas Data API allows you to easily access and manipulate your data stored in Atlas using standard HTTPS requests. To utilize the Data API, all you need is an HTTPS client (like curl or Postman) and a valid API key. In addition to the standard functionality, the Data API now also offers advanced security and permission options, such as:
- Support for various authentication methods, including JWT and email/password.
- Role-based access control, which allows you to configure rules for user roles to restrict read and write access through the API.
- IP Access List, which allows you to specify which IP addresses are permitted to make requests to the API.
The Atlas Data API also offers a great deal of flexibility and customization options. One of these features is the ability to create custom endpoints, which enable you to define additional routes for the API, giving you more control over the request method, URL, and logic. In this article, we will delve deeper into these capabilities and explore how they can be utilized. All this will be done using the new Atlas CLI, a command-line utility that makes it easier to automate Atlas cluster management.
If you want to learn more about how to get started with the Atlas Data API, I recommend reading Accessing Atlas Data in Postman with the Data API.
## Installing and configuring an Atlas project and cluster
For this tutorial, we will need the following tools:
- atlas cli
- realm cli
- curl
- awk (or gawk for Windows)
- jq
I have already set up an organization called `MongoDB Blog` in the Atlas cloud, and I am currently using the Atlas command line interface (CLI) to display the name of the organization.
```bash
atlas login
```
```bash
atlas organizations list
ID NAME
62d2d54c6b03350a26a8963b MongoDB Blog
```
I set a variable `ORG_ID ` with the Atlas organization id.
```bash
ORG_ID=$(atlas organizations list|grep Blog|awk '{print $1}')
```
I also created a project within the `MongoDB Blog` organization. To create a project, you can use `atlas project create`, and provide it with the name of the project and the organization in which it should live. The project will be named `data-api-blog `.
```bash
PROJECT_NAME=data-api
PROJECT_ID=$(atlas project create "${PROJECT_NAME}" --orgId "${ORG_ID}" | awk '{print $2}' | tr -d "'")
```
I will also deploy a MongoDB cluster within the project `data-api-blog ` on Google Cloud (free M0 trier). The cluster will be named `data-api`.
```bash
CLUSTER_NAME=data-api-blog
atlas cluster create "${CLUSTER_NAME}" --projectId "${PROJECT_ID}" --provider GCP --region CENTRAL_US --tier M0
```
After a few minutes, the cluster is ready. You can view existing clusters with the `atlas clusters list` command.
```bash
atlas clusters list --projectId "${PROJECT_ID}"
```
```bash
ID NAME MDB VER STATE
63b877a293bb5618ab7c373b data-api 5.0.14 IDLE
```
The next step is to load a sample data set. Wait a few minutes while the dataset is being loaded. I need this dataset to work on the query examples.
```bash
atlas clusters loadSampleData "${CLUSTER_NAME}" --projectId "${PROJECT_ID}"
```
Good practice is also to add the IP address to the Atlas project access list
```bash
atlas accessLists create --currentIp --projectId "${PROJECT_ID}"
```
## Atlas App Services (version 3.0)
The App Services API allows for programmatic execution of administrative tasks outside of the App Services UI. This includes actions such as modifying authentication providers, creating rules, and defining functions. In this scenario, I will be using the App Services API to programmatically create and set up the Atlas Data API.
Using the `atlas organizations apiKeys` with the Atlas CLI, you can create and manage your organization keys. To begin with, I will create an API key that will belong to the organization `MongoDB Blog`.
```bash
API_KEY_OUTPUT=$(atlas organizations apiKeys create --desc "Data API" --role ORG_OWNER --orgId "${ORG_ID}")
```
Each request made to the App Services Admin API must include a valid and current authorization token from the MongoDB Cloud API, presented as a bearer token in the Authorization header. In order to get one, I need the `PublicKey`and `PrivateKey` returned by the previous command.
```bash
PUBLIC_KEY=$(echo $API_KEY_OUTPUT | awk -F'Public API Key ' '{print $2}' | awk '{print $1}' | tr -d '\n')
PRIVATE_KEY=$(echo $API_KEY_OUTPUT | awk -F'Private API Key ' '{print $2}' | tr -d '\n')
```
> NOTE \
> If you are using a Windows machine, you might have to manually create those two environment variables. Get the API key output by running the following command.
> `echo $API_KEY_OUTPUT`
> Then create the API key variables with the values from the output.
> `PUBLIC_KEY=`
> `PRIVATE_KEY=`
Using those keys, I can obtain an access token.
```bash
curl --request POST --header 'Content-Type: application/json' --header 'Accept: application/json' --data "{\"username\": \"$PUBLIC_KEY\", \"apiKey\": \"$PRIVATE_KEY\"}" https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login | jq -r '.access_token' > token.txt
```
Then, using the access token, I create a new application type `data-api` in the Atlas Application Service. My application will be named `data-api-blog`.
```bash
ACCESS_TOKEN=$(cat token.txt)
DATA_API_NAME=data-api-blog
BASE_URL="https://realm.mongodb.com/api/admin/v3.0"
curl --request POST \
  --header "Authorization: Bearer $ACCESS_TOKEN" \
  "${BASE_URL}"/groups/"${PROJECT_ID}"/apps?product=data-api \
  --data '{
"name": "'"$DATA_API_NAME"'",
"deployment_model": "GLOBAL",
"environment": "development",
"data_source": {
"name": "'"$DATA_API_NAME"'",
"type": "mongodb-atlas",
"config": {
"clusterName": "'"$CLUSTER_NAME"'"
}
}
}'
```
The application is visible now through Atlas UI, in the App Services tab.
I can also display the new application using the `realm-cli` tool. The `realm-cli` command line utility is used to manage the App Services applications. In order to start using `realm-cli`, I have to log into the Atlas Application Services.
```bash
realm-cli login --api-key "$PUBLIC_KEY" --private-api-key "$PRIVATE_KEY"
```
Now, I can list my application with `realm-cli apps list`, assign the id to the variable, and use it later. In this example, the Data API application has a unique id: `data-api-blog-rzuzf`. (The id of your app will be different.)
```bash
APP_ID=$(realm-cli apps list | awk '{print $1}'|grep data)
```
## Configure and enable the Atlas Data API
By default, the Atlas Data API is disabled, so I will now enable it. This can be done through the Atlas UI; however, I want to show you how to do it using the command line.
### Export an existing app
Let's enhance the application by incorporating some unique settings and ensuring that it can be accessed from within the Atlas cluster. I will pull my data-api application on my local device.
```bash
realm-cli pull --remote="${APP_ID}"
```
Each component of an Atlas App Services app is fully defined and configured using organized JSON configuration files and JavaScript source code files. To get more information about app configuration, head to the docs. Below, I display the comprehensive directories tree.
```bash
data-api-blog
├── auth
│ ├── custom_user_data.json
│ └── providers.json
├── data_sources
│ └── data-api-blog
│ └── config.json
├── environments
│ ├── development.json
│ ├── no-environment.json
│ ├── production.json
│ ├── qa.json
│ └── testing.json
├── functions
│ └── config.json
├── graphql
│ ├── config.json
│ └── custom_resolvers
├── http_endpoints
│ └── config.json
├── log_forwarders
├── realm_config.json
├── sync
│ └── config.json
└── values
```
I will modify the `data_api_config.json` file located in the `http_endpoints` directory. This file is responsible for enabling the Atlas Data API.
I paste the document below into the `data_api_config.json` file. Note that to activate the Atlas Data API, I will set the `disabled` option to `false`. I also set `create_user_on_auth` to `true`. If your linked function is using application authentication and custom JWT authentication, the endpoint will create a new user with the passed-in JWT if that user has not been created yet.
_data-api-blog/http_endpoints/data_api_config.json_
```bash
{
"versions":
"v1"
],
"disabled": false,
"validation_method": "NO_VALIDATION",
"secret_name": "",
"create_user_on_auth": true,
"return_type": "JSON"
}
```
### Authentication providers
The Data API now supports new layers of configurable data permissioning and security, including new authentication methods, such as JWT authentication or email/password, and role-based access control, which allows for the configuration of rules for user roles that control read and write access through the API. Let's start by activating authentication using JWT tokens.
#### JWT tokens
JWT (JSON Web Token) is a compact, URL-safe means of representing claims to be transferred between two parties. It is often used for authentication and authorization purposes.
- They are self-contained, meaning they contain all the necessary information about the user, reducing the need for additional requests to the server.
- They can be easily passed in HTTP headers, which makes them suitable for API authentication and authorization.
- They are signed, which ensures that the contents have not been tampered with.
- They are lightweight and can be easily encoded/decoded, making them efficient to transmit over the network.
A JWT key is a secret value used to sign and verify the authenticity of a JWT token. The key is typically a long string of characters or a file that is securely stored on the server. I will pick a random key and create a secret in my project using Realm CLI.
```bash
KEY=thisisalongsecretkeywith32pluscharacters
SECRET_NAME=data-secret
realm-cli secrets create -a "${APP_ID}" -n "${SECRET_NAME}" -v "${KEY}"
```
I list my secret.
```bash
realm-cli secrets list -a "${APP_ID}"
```
```bash
Found 1 secrets
ID Name
------------------------ -----------
63d58aa2b10e93a1e3a45db1 data-secret
```
Next, I enable the use of two Data API authentication providers: traditional API key and JWT tokens. JWT token auth needs a secret created in the step above. I declare the name of the newly created secret in the configuration file `providers.json` located in the `auth` directory.
I paste this content into `providers.json` file. Note that I set the `disabled` option in both providers `api-key` and `custom-token` to `false`.
_auth/providers.json_
```bash
{
"api-key": {
"name": "api-key",
"type": "api-key",
"disabled": false
},
"custom-token": {
"name": "custom-token",
"type": "custom-token",
"config": {
"audience": ],
"requireAnyAudience": false,
"signingAlgorithm": "HS256"
},
"secret_config": {
"signingKeys": [
"data-secret"
]
},
"disabled": false
}
}
```
### Role-based access to the Data API
For each cluster, we can set high-level access permissions (Read-Only, Read & Write, No Access) and also set custom role-based access-control (App Service Rules) to further control access to data (cluster, collection, document, or field level).
By default, all collections have no access, but I will create a custom role and allow read-only access to one of them. In this example, I will allow read-only access to the `routes` collection in the `sample_training` database.
In the `data_sources` directory, I create directories with the name of the database and collection, along with a `rules.json` file, which will contain the rule definition.
_data_sources/data-api-blog/sample_training/routes/rules.json_
```bash
{
"collection": "routes",
"database": "sample_training",
"roles":
{
"name": "readAll",
"apply_when": {},
"read": true,
"write": false,
"insert": false,
"delete": false,
"search": true
}
]
}
```
It's time to deploy our settings and test them in the Atlas Data API. To deploy changes, I must push Data API configuration files to the Atlas server.
```bash
cd data-api-blog/
realm-cli push --remote "${APP_ID}"
```
Upon logging into the Data API UI, we see that the interface is activated, the URL endpoint is ready to use, and the custom rule is configured.
Going back to the `App Services` tab, we can see that two authentication providers are now enabled.
### Access the Atlas Data API with JWT token
I will send a query to the MongoDB database now using the Data API interface. As the authentication method, I will choose the JWT token. I need to first generate an access token. I will do this using an online JWT generator website. The audience (`aud`) for this token will need to be the name of the Data API application. I can remind myself of the unique name of my Data API by printing the `APP_ID` environment variable. I will need this name when creating the token.
```bash
echo ${APP_ID}
```
In the `PAYLOAD`field, I will place the following data. Note that I placed the name of my Data API in the `aud` field. It is the audience of the token. By default, App Services expects this value to be the unique app name of your app.
```bash
{
"sub": "1",
"name": "The Atlas Data API access token",
"iat": 1516239022,
"aud":"",
"exp": 1900000000
}
```
The signature portion of the JWT token is the secret key generated in one of the previous steps. In this example, the key is `thisisalongsecretkeywith32pluscharacters` . I will place this key in the `VERIFY SIGNATURE` field.
It will look like the screenshot below. The token has been generated and is visible in the top left corner.
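If you would rather script the token generation than use a website, a minimal Node.js sketch is shown below. It assumes the `jsonwebtoken` npm package is installed; the payload values mirror the ones used above and the app id is passed as a command-line argument:

```javascript
// generate-token.js — run with: node generate-token.js <your Data API app id>
const jwt = require("jsonwebtoken");

// The signing secret created earlier with `realm-cli secrets create`.
const secret = "thisisalongsecretkeywith32pluscharacters";
const appId = process.argv[2];

const token = jwt.sign(
  {
    sub: "1",
    name: "The Atlas Data API access token",
    aud: appId,      // audience must match the Data API app name
    exp: 1900000000  // expiry as a Unix timestamp, far in the future
  },
  secret,
  { algorithm: "HS256" }
);

console.log(token);
```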
I copy the token and place it in the `JWT` environment variable, and also create another variable called `ENDPOINT` with the Data API query endpoint. Now, finally, we can start making requests to the Atlas Data API. Since the access role was created for only one collection, my request will be related to it.
```bash
JWT=
DB=sample_training
COLL=routes
ENDPOINT=https://data.mongodb-api.com/app/"${APP_ID}"/endpoint/data/v1
curl --location --request POST $ENDPOINT'/action/findOne' \
  --header 'Access-Control-Request-Headers: *' \
  --header 'jwtTokenString: '$JWT \
  --header 'Content-Type: application/json' \
  --data-raw '{
"dataSource": "'"$DATA_API_NAME"'",
"database": "'"$DB"'",
"collection": "'"$COLL"'",
"filter": {}
}'
```
```bash
{"document":{"_id":"56e9b39b732b6122f877fa31","airline":{"id":410,"name":"Aerocondor","alias":"2B","iata":"ARD"},"src_airport":"CEK","dst_airport":"KZN","codeshare":"","stops":0,"airplane":"CR2"}}
```
>WARNING \
>If you are getting an error message along the lines of the following:
> `{"error":"invalid session: error finding user for endpoint","error_code":"InvalidSession","link":"..."}`
>Make sure that your JSON Web Token is valid. Verify that the audience (aud) matches your application id, that the expiry timestamp (exp) is in the future, and that the secret key used in the signature is the correct one.
You can see that this retrieved a single document from the routes collection.
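The same query endpoint supports other actions besides `findOne`, such as `find`. As an illustration, the request below (reusing the same JWT) would return up to five routes departing from JFK — the filter and limit values are just example choices:

```bash
curl --location --request POST $ENDPOINT'/action/find' \
--header 'Access-Control-Request-Headers: *' \
--header 'jwtTokenString: '$JWT \
--header 'Content-Type: application/json' \
--data-raw '{
    "dataSource": "'"$DATA_API_NAME"'",
    "database": "'"$DB"'",
    "collection": "'"$COLL"'",
    "filter": { "src_airport": "JFK" },
    "limit": 5
}'
```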
### Configure IP access list
Limiting access to your API endpoint to only authorized servers is a simple yet effective way to secure your API. You can modify the list of allowed IP addresses by going to `App Settings` in the left navigation menu and selecting the `IP Access list` tab in the settings area. By default, all IP addresses have access to your API endpoint (represented by 0.0.0.0). To enhance the security of your API, remove this entry and add entries for specific authorized servers. There's also a handy button to quickly add your current IP address for ease when developing using your API endpoint. You can also add your custom IP address with the help of `realm cli`. I'll show you how!
I am displaying the current list of authorized IP addresses by running the `realm cli` command.
```bash
realm-cli accessList list
```
```bash
Found 1 allowed IP address(es) and/or CIDR block(s)
IP Address Comment
---------- -------
0.0.0.0/0
```
I want to restrict access to the Atlas Data API to only my IP address. Therefore, I am displaying my actual address and assigning the address into a variable `MY_IP`.
```bash
MY_IP=$(curl ifconfig.me)
```
Next, I add this address to the IP access list, which belongs to my application, and delete `0.0.0.0/0` entry.
```bash
realm-cli accessList create -a "${APP_ID}" --ip "${MY_IP}" \
--comment "My current IP address"
realm-cli accessList delete -a "${APP_ID}" --ip "0.0.0.0/0"
```
The updated IP access list is visible in the Data API, App Services UI.
### Custom HTTPS endpoints
The Data API offers fundamental endpoint options for creating, reading, updating, and deleting, as well as for aggregating information.
Custom HTTPS endpoints can be created to establish specific API routes or webhooks that connect with outside services. These endpoints utilize a serverless function, written by you, to manage requests received at a specific URL and HTTP method. Communication with these endpoints is done via secure HTTPS requests, eliminating the need for installing databases or specific libraries. Requests can be made from any HTTP client.
I can configure the Data API custom HTTP endpoint for my app from the App Services UI or by deploying configuration files with Realm CLI. I will demonstrate a second method. My custom HTTP endpoint will aggregate, count, and sort all source airports from the collection `routes` from `sample_training` database and return the top three results. I need to change the `config.json` file from the `http_endpoint` directory, but before I do that, I need to pull the latest version of my app.
```bash
realm-cli pull --remote="${APP_ID}"
```
I name my custom HTTP endpoint `sumTopAirports` . Therefore, I have to assign this name to the `route` key and `function_name` key's in the `config.json` file.
_data-api-blog/http_endpoints/config.json_
```bash
[
  {
"route": "/sumTopAirports",
"http_method": "GET",
"function_name": "sumTopAirports",
"validation_method": "NO_VALIDATION",
"respond_result": true,
"fetch_custom_user_data": false,
"create_user_on_auth": true,
"disabled": false,
"return_type": "EJSON"
}
]
```
I need to also write a custom function. Atlas Functions run standard ES6+ JavaScript functions that you export from individual files. I create a `.js` file with the same name as the function in the functions directory or one of its subdirectories.
I then place this code in a newly created file. This code exports a function that aggregates data from the Atlas cluster `data-api-blog`, `sample_training` database, and collection `routes`. It groups, sorts, and limits the data to show the top three results, which are returned as an array.
_data-api-blog/functions/sumTopAirports.js_
```bash
exports = function({ query, headers, body }, response) {
const result = context.services
.get("data-api-blog")
.db("sample_training")
.collection("routes")
    .aggregate([
{ $group: { _id: "$src_airport", count: { $sum: 1 } } },
{ $sort: { count: -1 } },
{ $limit: 3 }
])
.toArray();
return result;
};
```
Next, I push my changes to the Atlas.
```bash
realm-cli push --remote "${APP_ID}"
```
My custom HTTPS endpoint is now visible in the Atlas UI.
I can now query the `sumTopAirports` custom HTTPS endpoint.
```bash
URL=https://data.mongodb-api.com/app/"${APP_ID}"/endpoint/sumTopAirports
curl --location --request GET $URL \
  --header 'Access-Control-Request-Headers: *' \
  --header 'jwtTokenString: '$JWT \
--header 'Content-Type: application/json'
```
```bash
{"_id":"ATL","count":{"$numberLong":"909"}},{"_id":"ORD","count":{"$numberLong":"558"}},{"_id":"PEK","count":{"$numberLong":"535"}}]
```
Security is essential when working with data because it keeps confidential and sensitive information protected. Data breaches can have devastating consequences, from financial loss to reputational damage. Using the Atlas command line interface, you can easily extend the Atlas Data API with additional security features like JWT tokens, the IP Access List, and custom role-based access control. Additionally, you can use custom HTTPS endpoints to provide a secure, user-friendly, and powerful way of managing and accessing data. The Atlas platform provides a flexible and robust solution for data-driven applications, allowing users to easily access and manage data in a secure manner.
## Summary
MongoDB Atlas Data API allows users to access their MongoDB Atlas data from any platform and programmatically interact with it. With the API, developers can easily build applications that access data stored in MongoDB Atlas databases. The API provides a simple and secure way to perform common database operations, such as retrieving and updating data, without having to write custom code. This makes it easier for developers to get started with MongoDB Atlas and provides a convenient way to integrate MongoDB data into existing applications.
If you want to learn more about all the capabilities of the Data API, check out our course over at MongoDB University. There are also multiple resources available on the Atlas CLI.
If you don't know how to start or you want to learn more, visit the MongoDB Developer Community forums! | md | {
"tags": [
"Atlas"
],
"pageDescription": "This article delves into the advanced features of the Data API, such as authentication and custom endpoints.",
"contentType": "Tutorial"
} | Mastering the Advanced Features of the Data API with Atlas CLI | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/java-single-collection-springpart1 | created | # Single-Collection Designs in MongoDB with Spring Data (Part 1)
Modern document-based NoSQL databases such as MongoDB offer advantages over traditional relational databases for many types of applications. One of the key benefits is data models that avoid the need for normalized data spread across multiple tables requiring join operations that are both computationally expensive and difficult to scale horizontally.
In the first part of this series, we will discuss single-collection designs — one of the design patterns used to realize these advantages in MongoDB. In Part 2, we will provide examples of how the single-collection pattern can be utilized in Java applications using Spring Data MongoDB.
## The ADSB air-traffic control application
In this blog post, we discuss a database design for collecting and analyzing Automatic Dependent Surveillance-Broadcast (ADSB) data transmitted by aircraft. ADSB is a component of a major worldwide modernization of air-traffic control systems that moves away from dependency on radar (which is expensive to maintain and has limited range) for tracking aircraft movement and instead has the aircraft themselves transmit their location, speed, altitude, and direction of travel, all based on approved Global Navigation Satellite Systems such as GPS, GLONASS, Galileo, and BeiDou. Find more information about ADSB.
A number of consumer-grade devices are available for receiving ADSB transmissions from nearby aircraft. These are used by pilots of light aircraft to feed data to tablet and smart-phone based navigation applications such as Foreflight. This provides a level of situational awareness and safety regarding the location of nearby flight traffic that previously was simply not available even to commercial airline pilots. Additionally, web-based aircraft tracking initiatives, such as the Opensky Network, depend on community-sourced ADSB data to build their databases used for numerous research projects.
Whilst most ADSB receivers retail in the high hundreds-of-dollars price range, the rather excellent Stratux open-source project allows a complete receiver system to be built using a Raspberry Pi and cheap USB Software Defined Radios (SDRs). A complete system can be built from parts totalling around $200 (1).
The Stratux receiver transmits data to listening applications either over a raw TCP/IP connection with messages adhering to the GDL90 specification designed and maintained by Garmin, or as JSON messages sent to subscribers to a websocket connection. In this exercise, we will simulate receiving messages from a Stratux receiver — **a working receiver is not a prerequisite for completing the exercises**. The database we will be building will track observed aircraft, the airlines they belong to, and the individual ADSB position reports picked up by our receiver.
In a traditional RDBMS-based system, we might settle on a normalized data model that looks like this:
Each record in the airline table can be joined to zero or more aircraft records, and each aircraft record can be joined to zero or more ADSB position reports. Whilst this model offers a degree of flexibility in terms of querying, queries that join across tables are computationally intensive and difficult to scale horizontally. In particular, consider that over 3000 commercial flights are handled per day by airports in the New York City area and that each of those flights are transmitting a new ADSB position report every second. With ADSB transmissions for a flight being picked up by the receiver for an average of 15 minutes until the aircraft moves out of range, an ADSB receiver in New York alone could be feeding over 2.5 million position reports per day into the system. With a network of ADSB receivers positioned at major hubs throughout the USA, the possibility of needing to be able to scale out could grow quickly.
MongoDB has been designed from the outset to be easy to scale horizontally. However, to do that, the correct design principles and patterns must be employed, one of which is to avoid unnecessary joins. In our case, we will be utilizing the *document data model*, *polymorphic collections*, and the *single-collection design pattern*. And whilst it’s common practice in relational database design to start by normalizing the data before considering access patterns, with document-centric databases such as MongoDB, you should always start by considering the access patterns for your data and work from there, using the guiding principle that *data that is accessed together should be stored together*.
In MongoDB, data is stored in JSON (2) like documents, organized into collections. In relational database terms, a document is analogous to a record whilst a collection is analogous to a table. However, there are some key differences to be aware of.
A document in MongoDB can be hierarchical, in that the value of any given attribute (column in relational terms) in a document may itself be a document or an array of values or documents. This allows for data to be stored in a single document within a collection in ways that tabular relational database designs can’t support and that would require data to be stored across multiple tables and accessed using joins. Consider our airline to aircraft one-to-many and aircraft to ADSB position report one-to-many relationships. In our relational model, this requires three tables joined using primary-foreign key relationships. In MongoDB, this could be represented by airline documents, with their associated aircraft embedded in the same document and the ADSB position reports for each aircraft further embedded in turn, all stored in a single collection. Such documents might look like this:
```
{
"_id": {
"$oid": "62abdd534e973de2fcbdc10d"
},
"airlineName": "Delta Air Lines",
"airlineIcao": "DAL",
...
"aircraft":
{
"icaoNumber": "a36f7e",
"tailNumber": "N320NB",
...
"positionReports": [
{
"msgNum": "1",
"altitude": 38825,
...
"geoPoint": {
"type": "Point",
"coordinates": [
-4.776722,
55.991776
]
},
},
{
"msgNum": "2",
...
},
{
"msgNum": "3",
...
}
]
},
{
"icaoNumber": "a93d7c",
...
},
{
"icaoNumber": "ab8379",
...
},
]
}
```
By embedding the aircraft information for each airline within its own document, all stored within a single collection, we are able to retrieve information for an airline and all its aircraft using a single query and no joins:
```javascript
db.airlines.find({"airlineName": "Delta Air Lines"}
```
Embedded, hierarchical documents provide a great deal of flexibility in our data design and are consistent with our guiding principle that *data that is accessed together should be stored together*. However, there are some things to be aware of:
* For some airlines, the number of embedded aircraft documents could become large. This would be compounded by the number of embedded ADSB position reports within each associated aircraft document. In general, large, unbounded arrays are considered an anti-pattern within MongoDB as they can lead to excessively sized documents with a corresponding impact on update operations and data retrieval.
* There may be a need to access an individual airline or aircraft’s data independently of either the corresponding aircraft data or information related to other aircraft within the airline’s fleet. Whilst the MongoDB query aggregation framework allows for such shaping and projecting of the data returned by a query to do this, it would add extra processing overhead when carrying out such queries. Alternatively, the required data could be filtered out of the query returns within our application, but that might lead to unnecessary large data transmissions.
* Some aircraft may be operated privately, and not be associated with an airline.
One approach to tackling these problems would be to separate the airline, aircraft, and ADSB position report data into separate documents stored in three different collections with appropriate cross references (primary/foreign keys). In some cases, this might be the right approach (for example, if synchronizing data from mobile devices using Realm). However, it comes at the cost of maintaining additional collections and indexes, and might necessitate the use of joins ($lookup stages in a MongoDB aggregation pipeline) when retrieving data. For some of our access patterns, this design would be violating our guiding principle that *data that is accessed together should be stored together*. Also, as the amount of data in an application grows and the need for scaling through sharding of data starts to become a consideration, having related data separated across multiple collections can complicate the maintenance of data across shards.
Another option would be to consider using *the Subset Pattern* which limits the number of embedded documents we maintain according to an algorithm (usually most recently received/accessed, or most frequently accessed), with the remaining documents stored in separate collections. This allows us to control the size of our hierarchical documents and in many workloads, cover our data retrieval and access patterns with a single query against a single collection. However, for our airline data use case, we may find that the frequency with which we are requesting all aircraft for a given airline, or all position reports for an aircraft (of which there could be many thousands), the subset pattern may still lead to many queries requiring joins.
One further solution, and the approach we’ll take in this article, is to utilize another feature of MongoDB: polymorphic collections. Polymorphic collections refer to the ability of collections to store documents of varying types. Unlike relational tables, where the columns of each table are pre-defined, a collection in MongoDB can contain documents of any design, with the only requirement being that every document must contain an “\_id” field containing a unique identifier. This ability has led some observers to describe MongoDB as being schemaless. However, it’s more correct to describe MongoDB as “schema-optional.” You *can* define restrictions on the design of documents that are accepted by a collection using JSON Schema, but this is optional and at the discretion of the application developers. By default, no restrictions are imposed. It’s considered best practice to only store documents that are in some way related and/or will be retrieved in a single operation within the same collection, but again, this is at the developers’ discretion.
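As a brief aside, if you did want to constrain what documents the collection accepts, a JSON Schema validator can be attached when the collection is created. The sketch below is purely illustrative — it only requires the numeric `recordType` helper field that appears in the example documents that follow:

```javascript
// Optional: create the collection with a validator that insists on a recordType field.
db.createCollection("aerodata", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["recordType"],
      properties: {
        recordType: {
          bsonType: "int",
          description: "1 = airline, 2 = aircraft, 3 = ADSB position report"
        }
      }
    }
  }
})
```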
Utilizing polymorphic collection in our aerodata example, we separate our Airline, Aircraft, and ADSB position report data into separate documents, but store them all within a *single collection.* Taking this approach, the documents in our collection may end up looking like this:
```JSON
{
"_id": "DAL",
"airlineName": "Delta Air Lines",
...
"recordType": 1
},
{
"_id": "DAL_a93d7c",
"tailNumber": "N695CA",
"manufacturer": "Bombardier Inc",
"model": "CL-600-2D24",
"recordType": 2
},
{
"_id": "DAL_ab8379",
"tailNumber": "N8409N",
"manufacturer": "Bombardier Inc",
"model": "CL-600-2B19",
"recordType": 2
},
{
"_id": "DAL_a36f7e",
"tailNumber": "N8409N",
"manufacturer": "Airbus Industrie",
"model": "A319-114",
"recordType": 2
},
{
"_id": "DAL_a36f7e_1",
"altitude": 38825,
. . .
"geoPoint": {
"type": "Point",
"coordinates":
-4.776722,
55.991776
]
},
"recordType": 3
},
{
"_id": "DAL_a36f7e_2",
"altitude": 38875,
...
"geoPoint": {
"type": "Point",
"coordinates": [
-4.781466,
55.994843
]
},
"recordType": 3
},
{
"_id": "DAL_a36f7e_3",
"altitude": 38892,
...
"geoPoint": {
"type": "Point",
"coordinates": [
-4.783344,
55.99606
]
},
"recordType": 3
}
```
There are a couple of things to note here. Firstly, with the airline, aircraft, and ADSB position reports separated into individual documents rather than embedded within each other, we can query for and return the different document types individually or in combination as needed.
Secondly, we have utilized a custom format for the “\_id” field in each document. Whilst the “\_id” field is always required in MongoDB, the format of the value stored in the field can be anything as long as it’s unique within that collection. By default, if no value is provided, MongoDB will assign an ObjectId value to the field. However, there is nothing to prevent us using any value we wish, as long as care is taken to ensure each value used is unique. Considering that MongoDB will always maintain an index on the “\_id” field, it makes sense to use a value in the field that has some meaning to our application. In our case, the values are used to represent the hierarchy within our data. Airline document “\_id” fields contain the airline’s unique ICAO (International Civil Aviation Organization) code. Aircraft document “\_id” fields start with the owning airline’s ICAO code, followed by an underscore, followed by the aircraft’s own unique ICAO code. Finally, ADSB position report document “\_id” fields start with the airline ICAO code, an underscore, then the aircraft ICAO code, then a second underscore, and finally an incrementing message number.
Whilst we could have stored the airline and aircraft ICAO codes and ADSB message numbers in their own fields to support our queries, and in some ways doing so would be a simpler approach, we would have to create and maintain additional indexes on our collection against each field. Overloading the values in the “\_id” field in the way that we have avoids the need for those additional indexes.
Lastly, we have added a helper field called recordType to each document to aid filtering of searches. Airline documents have a recordType value of 1, aircraft documents have a recordType value of 2, and ADSB position report documents have a recordType value of 3. To maintain query performance, the recordType field should be indexed.
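As a rough sketch, assuming the documents are stored in a collection named “aerodata” (as they are in the queries below), the index could be created from mongosh like this:
```javascript
// Index the recordType helper field so filters on document type can use an
// index instead of scanning the whole collection.
db.aerodata.createIndex({ "recordType": 1 })
```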
With these changes in place, and assuming we have placed all our documents in a collection named “aerodata”, we can now carry out the following range of queries:
Retrieve all documents related to Delta Air Lines:
```javascript
db.aerodata.find({"_id": /^DAL/})
```
Retrieve Delta Air Lines’ airline document on its own:
```javascript
db.aerodata.find({"_id": "DAL"})
```
Retrieve all aircraft documents for aircraft in Delta Air Lines’ fleet:
```javascript
db.aerodata.find({"_id": /^DAL_/, "recordType": 2})
```
Retrieve the aircraft document for Airbus A319 with ICAO code "a36f7e" on its own:
```javascript
db.aerodata.find({"_id": "DAL_a36f7e", "recordType": 2})
```
Retrieve all ADSB position report documents for Airbus A319 with ICAO code "a36f7e":
```javascript
db.aerodata.find({"_id": /^DAL_a36f7e/, "recordType": 3})
```
In each case, we are able to retrieve the data we need with a single query operation (requiring a single round trip to the database) against a single collection (and thus, no joins) — even in cases where we are returning multiple documents of different types. Note the use of regular expressions in some of the queries. In each case, our search pattern is anchored to the start of the field value being searched using the “^” caret symbol. This is important when performing a regular expression search, as MongoDB can only utilize an index on the field being searched if the search pattern is anchored to the start of the field.
The following search will utilize the index on the “\_id” field:
```javascript
db.aerodata.find({"_id": /^DAL/})
```
The following search will **not** be able to utilize the index on the “\_id” field and will instead perform a full collection scan:
```javascript
db.aerodata.find({"_id": /DAL/})
```
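One way to verify which plan the server picks is to request the query’s explain output. A quick sketch in mongosh:
```javascript
// The anchored prefix regex should show an IXSCAN on the _id index in the
// winning plan, while the unanchored version falls back to a COLLSCAN.
db.aerodata.find({"_id": /^DAL/}).explain("executionStats")
```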
In this first part of our two-part post, we have seen how polymorphic single-collection designs in MongoDB can provide all the query flexibility of normalized relational designs, whilst simultaneously avoiding anti-patterns, such as unbounded arrays and unnecessary joins. This makes the resulting collections highly performant from a search standpoint and amenable to horizontal scaling. In Part 2, we will show how we can work with these designs using Spring Data MongoDB in Java applications.
The example source code used in this series is available on GitHub.
(1) As of October 2022, pandemic-era supply chain issues have impacted Raspberry Pi availability and cost. However, for anyone interested in building their own Stratux receiver, the following parts list will allow a basic system to be put together:
* USB SDR Radios
* Raspberry Pi Starter Kit
* SD Card
* GPS Receiver (optional)
(2) MongoDB stores data using BSON - a binary form of JSON with support for additional data types not supported by JSON. Get more information about the BSON specification. | md | {
"tags": [
"Java"
],
"pageDescription": "Learn how to avoid joins in MongoDB by using Single Collection design patterns, and access those patterns using Spring Data in Java applications.",
"contentType": "Tutorial"
} | Single-Collection Designs in MongoDB with Spring Data (Part 1) | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/customer-success-ruby-tablecheck | created | # TableCheck: Empowering Restaurants with Best-in-Class Booking Tools Powered by MongoDB
TableCheck is the world’s premier booking and guest platform. Headquartered in Tokyo, they empower restaurants with tools to elevate their guest experience and create guests for life, with features like booking forms, surveys, marketing automation tools, and an ecosystem of powerful solutions for restaurants to take their business forward.
## Architectural overview of TableCheck
Launched in 2013, TableCheck began life as a Ruby on Rails monolith. Over time, the solution has been expanded to include satellite microservices. However, one constant that has remained throughout this journey was MongoDB.
Originally, TableCheck managed their own MongoDB Enterprise clusters. However, once MongoDB Atlas became available, they migrated their data to a managed replica set running in AWS.
According to CTO Johnny Shields, MongoDB was selected initially as the database of choice for TableCheck as it was _"love at first sight"_. Though MongoDB was a much different solution in 2013, even in the database product’s infancy, it fit perfectly with their development workflow and allowed them to work with their data easily and quickly while building out their APIs and application.
## Ruby on Rails + MongoDB
Any developer familiar with Ruby on Rails knows that the ORM layer (via Active Record) was designed to support relational databases. MongoDB’s Mongoid ODM acts as a veritable "drop-in" replacement for existing Active Record adapters so that MongoDB can be used seamlessly with Rails. The CRUD API is familiar to Ruby on Rails developers and makes working with MongoDB extremely easy.
When asked if MongoDB and Ruby were a good fit, Johnny Shields replied:
> _"Yes, I’d add the combo of MongoDB + Ruby + Rails + Mongoid is a match made in heaven. Particularly with the Mongoid ORM library, it is easy to get MongoDB data represented in native Ruby data structures, e.g. as nested arrays and objects"._
This has allowed TableCheck to ensure MongoDB remains the "golden-source" of data for the entire platform. They currently replicate a subset of data to Elasticsearch for deep multi-field search functionality. However, given the rising popularity and utility of Atlas Search, this part of the stack may be further simplified.
As MongoDB data changes within the TableCheck platform, these changes are broadcast over Apache Kafka via the MongoDB Kafka Connector to enable downstream services to consume them. Several of their microservices are built in Elixir, including a data analytics application. PostgreSQL is being used for these data analytics use cases, as the only MongoDB drivers for Elixir are managed by the community (such as `elixir-mongo/mongodb` or `zookzook/elixir-mongodb-driver`). However, should an official driver surface, this decision may change.
## Benefits of the Mongoid ODM for Ruby on Rails development
The "killer feature" for new users discovering Ruby on Rails is Active Record Migrations. This feature of Active Record provides a DSL that enables developers to manage their relational database’s schema without having to write a single line of SQL. Because MongoDB is a NoSQL database, migrations and schema management are unnecessary!
Johnny Shields shares the following based on his experience working with MongoDB and Ruby on Rails:
> _"You can add or remove data fields without any need to migrate your database. This alone is a "killer-feature" reason to choose MongoDB. You do still need to consider database indexes however, but MongoDB Atlas has a profiler which will monitor for slow queries and auto-suggest if any index is needed."_
As the Mongoid ODM supports large portions of the Active Record API, another powerful productivity feature TableCheck was able to leverage is the use of Associations. Cross-collection referenced associations are available. However, unlike relational databases, embedded associations can be used to simplify the data model.
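For illustration, here is a minimal sketch of what the two association styles can look like with Mongoid (the class names are hypothetical and are not TableCheck’s actual models):
```ruby
class Restaurant
  include Mongoid::Document
  field :name, type: String

  embeds_many :tables        # embedded: stored inside the Restaurant document
  has_many :reservations     # referenced: stored in their own collection
end

class Table
  include Mongoid::Document
  embedded_in :restaurant
end

class Reservation
  include Mongoid::Document
  belongs_to :restaurant
end
```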
## Open source and community strong
Both `mongodb/mongoid` and `mongodb/mongo-ruby-driver` are licensed under OSI approved licenses and MongoDB encourages the community to contribute feedback, issues, and pull requests!
Since 2013, the TableCheck team has contributed nearly 150 PRs to both projects. The majority tend to be quality-of-life improvements and bug fixes related to edge-case combinations of various methods/options. They’ve also helped improve the accuracy of documentation in many places, and have even helped the MongoDB Ruby team set up GitHub Actions so that it would be easier for outsiders to contribute.
With so many contributions under their team’s belt, and clearly able to extend the Driver and ODM to fit any use case the MongoDB team may not have envisioned, when asked if there were any use-cases MongoDB couldn’t satisfy within a Ruby on Rails application, the feedback was:
> _"I have not encountered any use case where I’ve felt SQL would be a fundamentally better solution than MongoDB. On the contrary, we have several microservices which we’ve started in SQL and are moving to MongoDB now wherever we can."_
The TableCheck team are vocal advocates for things like better changelogs and more discipline in following semantic versioning best practices. These have benefited the community greatly, and Johnny and team continue to advocate for things like adopting static code analysis (ex: via Rubocop) to improve overall code quality and consistency.
## Overall thoughts on working with MongoDB and Ruby on Rails
TableCheck has been a long-time user of MongoDB via the Ruby driver and Mongoid ODM, and as a result has experienced some growing pains as the data platform matured. When asked about any challenges his team faced working with MongoDB over the years, Johnny replied:
> _"The biggest challenge was that in earlier MongoDB versions (3.x) there were a few random deadlock-type bugs in the server that bit us. These seemed to have gone away in newer versions (4.0+). MongoDB has clearly made an investment in core stability which we have benefitted from first-hand. Early on we were maintaining our own cluster, and from a few years ago we moved to Atlas and MongoDB now does much of the maintenance for us"._
We at MongoDB continue to be impressed by the scope and scale of the solutions our users and customers like TableCheck continue to build. Ruby on Rails continues to be a viable framework for enterprise and best-in-class applications, and our team will continue to grow the product to meet the needs of the next generation of Ruby application developers.
Johnny presented at MongoDB Day Singapore on November 23, 2022 (view presentation). His talk covered a number of topics, including his experiences working with MongoDB and Ruby. | md | {
"tags": [
"MongoDB",
"Ruby"
],
"pageDescription": "TableCheck's CTO Johnny Shields discusses their development experience working with the MongoDB Ruby ODM (mongoid) and how they accelerated and streamlined their development processes with these tools.",
"contentType": "Article"
} | TableCheck: Empowering Restaurants with Best-in-Class Booking Tools Powered by MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/saving-data-in-unity3d-using-playerprefs | created | # Saving Data in Unity3D Using PlayerPrefs
*(Part 1 of the Persistence Comparison Series)*
Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well.
In this tutorial series, we will explore the options given to us by Unity and third-party libraries. Each part will take a deeper look into one of them with the final part being a comparison:
- Part 1: PlayerPrefs *(this tutorial)*
- Part 2: Files
- Part 3: BinaryReader and BinaryWriter *(coming soon)*
- Part 4: SQL
- Part 5: Realm Unity SDK
- Part 6: Comparison of all these options
To make it easier to follow along, we have prepared an example repository for you. All those examples can be found within the same Unity project since they all use the same example game, so you can see the differences between those persistence approaches better.
The repository can be found at https://github.com/realm/unity-examples, with this tutorial being on the persistence-comparison branch next to other tutorials we have prepared for you.
## Example game
*Note that if you have worked through any of the other tutorials in this series, you can skip this section since we are using the same example for all parts of the series so that it is easier to see the differences between the approaches.*
The goal of this tutorial series is to show you a quick and easy way to take some first steps in the various ways to persist data in your game.
Therefore, the example we will be using will be as simple as possible in the editor itself so that we can fully focus on the actual code we need to write.
A simple capsule in the scene will be used so that we can interact with a game object. We then register clicks on the capsule and persist the hit count.
When you open up a clean 3D template, all you need to do is choose `GameObject` -> `3D Object` -> `Capsule`.
You can then add scripts to the capsule by activating it in the hierarchy and using `Add Component` in the inspector.
The scripts we will add to this capsule showcasing the different methods will all have the same basic structure that can be found in `HitCountExample.cs`.
```cs
using UnityEngine;
/// <summary>
/// This script shows the basic structure of all other scripts.
/// </summary>
public class HitCountExample : MonoBehaviour
{
// Keep count of the clicks.
[SerializeField] private int hitCount; // 1
private void Start() // 2
{
// Read the persisted data and set the initial hit count.
hitCount = 0; // 3
}
private void OnMouseDown() // 4
{
// Increment the hit count on each click and save the data.
hitCount++; // 5
}
}
```
The first thing we need to add is a counter for the clicks on the capsule (1). Add a `[SerializeField]` here so that you can observe it while clicking on the capsule in the Unity editor.
Whenever the game starts (2), we want to read the current hit count from the persistence and initialize `hitCount` accordingly (3). This is done in the `Start()` method that is called whenever a scene is loaded for each game object this script is attached to.
The second part to this is saving changes, which we want to do whenever we register a mouse click. The Unity message for this is `OnMouseDown()` (4). This method gets called every time the `GameObject` that this script is attached to is clicked (with a left mouse click). In this case, we increment the `hitCount` (5) which will eventually be saved by the various options shown in this tutorials series.
## PlayerPrefs
(See `PlayerPrefsExampleSimple.cs` in the repository for the finished version.)
The easiest and probably most straightforward way to save data in Unity is using the built-in `PlayerPrefs`. The downside, however, is the limited usability since only three data types are supported:
- string
- float
- integer
Another important fact about them is that they save data in plain text, which means a player can easily change their content. `PlayerPrefs` should therefore only be used for things like graphic settings, user names, and other data that could be changed in game anyway and therefore does not need to be safe.
Depending on the operating system the game is running on, the `PlayerPrefs` get saved in different locations. They are all listed in the documentation. Windows, for example, uses the registry to save the data under `HKCU\Software\ExampleCompanyName\ExampleProductName`.
The usage of `PlayerPrefs` is basically the same as a dictionary. They get accessed as `key`/`value` pairs where the `key` is of type `string`. Each supported data type has its own function:
- SetString(key, value)
- GetString(key)
- SetFloat(key, value)
- GetFloat(key)
- SetInt(key, value)
- GetInt(key)
```cs
using UnityEngine;
public class PlayerPrefsExampleSimple : MonoBehaviour
{
// Resources:
// https://docs.unity3d.com/ScriptReference/PlayerPrefs.html
[SerializeField] private int hitCount = 0;
private const string HitCountKey = "HitCountKey"; // 1
private void Start()
{
// Check if the key exists. If not, we never saved the hit count before.
if (PlayerPrefs.HasKey(HitCountKey)) // 2
{
// Read the hit count from the PlayerPrefs.
hitCount = PlayerPrefs.GetInt(HitCountKey); // 3
}
}
private void OnMouseDown()
{
hitCount++;
// Set and save the hit count before ending the game.
PlayerPrefs.SetInt(HitCountKey, hitCount); // 4
PlayerPrefs.Save(); // 5
}
}
```
For the `PlayerPrefs` example, we create a script named `PlayerPrefsExampleSimple` based on the `HitCountExample` shown earlier.
In addition to the basic structure, we also need to define a key (1) that will be used to save the `hitCount` in the `PlayerPrefs`. Let's call it `"HitCountKey"`.
When the game starts, we first want to check if there was already a hit count saved. The `PlayerPrefs` have a built-in function `HasKey(HitCountKey)` (2) that lets us achieve exactly this. If the key exists, we read it using `GetInt(HitCountKey)` (3) and save it in the counter.
The second part is saving data whenever it changes. On each click after we incremented the `hitCount`, we have to call `SetInt(key, value)` on `PlayerPrefs` (4) to set the new data. Note that this does not save the data to disk. This only happens during `OnApplicationQuit()` implicitly. We can explicitly write the data to disk at any time to avoid losing data in case the game crashes and `OnApplicationQuit()` never gets called.
To write the data to disk, we call `Save()` (5).
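As a side note, if you ever need to reset the persisted data while testing, `PlayerPrefs` also offers delete functions. A minimal sketch, not used elsewhere in this tutorial:
```cs
// Remove only the hit count entry...
PlayerPrefs.DeleteKey(HitCountKey);

// ...or wipe every PlayerPrefs entry saved by this game.
// PlayerPrefs.DeleteAll();
```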
## Extended example
(See `PlayerPrefsExampleExtended.cs` in the repository for the finished version.)
In the second part of this tutorial, we will extend this very simple version to look at ways to save more complex data within `PlayerPrefs`.
Instead of just detecting a mouse click, the extended script will detect `Shift+Click` and `Ctrl+Click` as well.
Again, to visualize this in the editor, we will add some more `[SerializeFields]` (1). Substitute the current one (`hitCount`) with the following:
```cs
// 1
[SerializeField] private int hitCountUnmodified = 0;
[SerializeField] private int hitCountShift = 0;
[SerializeField] private int hitCountControl = 0;
```
Each type of click will be shown in its own `Inspector` element.
The same has to be done for the `PlayerPrefs` keys. Remove the `HitCountKey` and add three new elements (2).
```cs
// 2
private const string HitCountKeyUnmodified = "HitCountKeyUnmodified";
private const string HitCountKeyShift = "HitCountKeyShift";
private const string HitCountKeyControl = "HitCountKeyControl";
```
There are many different ways to save more complex data. Here we will be using three different entries in `PlayerPrefs` as a first step. Later, we will also look at how we can save structured data that belongs together in a different way.
One more field we need to save is the `KeyCode` for the key that was pressed:
```cs
// 3
private KeyCode modifier = default;
```
When starting the scene, loading the data looks similar to the previous example, just extended by two more calls:
```cs
private void Start()
{
// Check if the key exists. If not, we never saved the hit count before.
if (PlayerPrefs.HasKey(HitCountKeyUnmodified)) // 4
{
// Read the hit count from the PlayerPrefs.
hitCountUnmodified = PlayerPrefs.GetInt(HitCountKeyUnmodified); // 5
}
if (PlayerPrefs.HasKey(HitCountKeyShift)) // 4
{
// Read the hit count from the PlayerPrefs.
hitCountShift = PlayerPrefs.GetInt(HitCountKeyShift); // 5
}
if (PlayerPrefs.HasKey(HitCountKeyControl)) // 4
{
// Read the hit count from the PlayerPrefs.
hitCountControl = PlayerPrefs.GetInt(HitCountKeyControl); // 5
}
}
```
As before, we first check if the key exists in the `PlayerPrefs` (4) and if so, we set the corresponding counter (5) to its value. This is fine for a simple example, but here you can already see that saving more complex data will very quickly bring `PlayerPrefs` to its limits if you do not want to write a lot of boilerplate code.
Unity offers detection of keyboard presses and other input, like a controller or the mouse, via a class called `Input`. Using `GetKey`, we can check if a specific key was held down the moment we register a mouse click.
The documentation tells us about one important fact though:
> Note: Input flags are not reset until Update. You should make all the Input calls in the Update Loop.
Therefore, we also need to implement the `Update()` function (6) where we check for the key and save it in the previously defined `modifier`.
The keys can be addressed via their name as string but the type safe way to do this is to use the class `KeyCode`, which defines every key necessary. For our case, this would be `KeyCode.LeftShift` and `KeyCode.LeftControl`.
Those checks use `Input.GetKey()` (7) and if one of the two was found, it will be saved as the `modifier` (8). If neither of them was pressed (9), we just reset `modifier` to the `default` (10) which we will use as a marker for an unmodified mouse click.
```cs
private void Update() // 6
{
// Check if a key was pressed.
if (Input.GetKey(KeyCode.LeftShift)) // 7
{
// Set the LeftShift key.
modifier = KeyCode.LeftShift; // 8
}
else if (Input.GetKey(KeyCode.LeftControl)) // 7
{
// Set the LeftControl key.
modifier = KeyCode.LeftControl; // 8
}
else // 9
{
// In any other case reset to default and consider it unmodified.
modifier = default; // 10
}
}
```
The same triplet can then also be found in the click detection:
```cs
private void OnMouseDown()
{
// Check if a key was pressed.
switch (modifier)
{
case KeyCode.LeftShift: // 11
// Increment the hit count and set it to PlayerPrefs.
hitCountShift++; // 12
PlayerPrefs.SetInt(HitCountKeyShift, hitCountShift); // 15
break;
case KeyCode.LeftControl: // 11
// Increment the hit count and set it to PlayerPrefs.
hitCountControl++; // 12
PlayerPrefs.SetInt(HitCountKeyControl, hitCountControl); // 15
break;
default: // 13
// Increment the hit count and set it to PlayerPrefs.
hitCountUnmodified++; // 14
PlayerPrefs.SetInt(HitCountKeyUnmodified, hitCountUnmodified); // 15
break;
}
// Persist the data to disk.
PlayerPrefs.Save(); // 16
}
```
First we check if one of those two was held down while the click happened (11) and if so, increment the corresponding hit counter (12). If not (13), the `unmodified` counter has to be incremented (14).
Finally, we need to set each of those three counters individually (15) via `PlayerPrefs.SetInt()` using the three keys we defined earlier.
Like in the previous example, we also call `Save()` (16) at the end to make sure data does not get lost if the game does not end normally.
When switching back to the Unity editor, the script on the capsule should now look like this:
## More complex data
(See `PlayerPrefsExampleJson.cs` in the repository for the finished version.)
In the previous two sections, we saw how to handle two simple examples of persisting data in `PlayerPrefs`. What if they get more complex than that? What if you want to structure and group data together?
One possible approach would be to use the fact that `PlayerPrefs` can hold a `string` and save a `JSON` in there.
First we need to figure out how to actually transform our data into JSON. The .NET framework as well as the `UnityEngine` framework offer a JSON serializer and deserializer to do this job for us. Both behave very similarly, but we will use Unity's own `JsonUtility`, which performs better in Unity than other similar JSON solutions.
To transform data to JSON, we first need to create a container object. This has some restriction:
> Internally, this method uses the Unity serializer. Therefore, the object you pass in must be supported by the serializer. It must be a MonoBehaviour, ScriptableObject, or plain class/struct with the Serializable attribute applied. The types of fields that you want to be included must be supported by the serializer; unsupported fields will be ignored, as will private fields, static fields, and fields with the NonSerialized attribute applied.
In our case, since we are only saving simple data types (int) for now, that's fine. We can define a new class (1) and call it `HitCount`:
```cs
// 1
private class HitCount
{
public int Unmodified;
public int Shift;
public int Control;
}
```
We will keep the Unity editor outlets the same (2):
```cs
// 2
[SerializeField] private int hitCountUnmodified = 0;
[SerializeField] private int hitCountShift = 0;
[SerializeField] private int hitCountControl = 0;
```
All those will eventually be saved into the same `PlayerPrefs` field, which means we only need one key (3):
```cs
// 3
private const string HitCountKey = "HitCountKeyJson";
```
As before, the `modifier` will indicate which modifier was used:
```cs
// 4
private KeyCode modifier = default;
```
In `Start()`, we then need to read the JSON. As before, we check if the `PlayerPrefs` key exists (5) and then read the data, this time using `GetString()` (as opposed to `GetInt()` before).
Transforming this JSON into the actual object is then done using `JsonUtility.FromJson()` (6), which takes the string as an argument. It's a generic function and we need to provide the information about which object this JSON is supposed to be representing—in this case, `HitCount`.
If the JSON can be read and transformed successfully, we can set the hit count fields (7) to their three values.
```cs
private void Start()
{
// 5
// Check if the key exists. If not, we never saved to it.
if (PlayerPrefs.HasKey(HitCountKey))
{
// 6
var jsonString = PlayerPrefs.GetString(HitCountKey);
var hitCount = JsonUtility.FromJson<HitCount>(jsonString);
// 7
if (hitCount != null)
{
hitCountUnmodified = hitCount.Unmodified;
hitCountShift = hitCount.Shift;
hitCountControl = hitCount.Control;
}
}
}
```
The detection for the key that was pressed is identical to the extended example since it does not involve loading or saving any data but is just a check for the key during `Update()`:
```cs
private void Update() // 8
{
// Check if a key was pressed.
if (Input.GetKey(KeyCode.LeftShift)) // 9
{
// Set the LeftShift key.
modifier = KeyCode.LeftShift; // 10
}
else if (Input.GetKey(KeyCode.LeftControl)) // 9
{
// Set the LeftControl key.
modifier = KeyCode.LeftControl; // 10
}
else // 11
{
// In any other case reset to default and consider it unmodified.
modifier = default; // 12
}
}
```
In a very similar fashion, `OnMouseDown()` needs to save the data whenever it's changed.
```cs
private void OnMouseDown()
{
// Check if a key was pressed.
switch (modifier)
{
case KeyCode.LeftShift: // 13
// Increment the hit count and set it to PlayerPrefs.
hitCountShift++; // 14
break;
case KeyCode.LeftControl: // 13
// Increment the hit count and set it to PlayerPrefs.
hitCountControl++; // 14
break;
default: // 15
// Increment the hit count and set it to PlayerPrefs.
hitCountUnmodified++; // 16
break;
}
// 17
var updatedCount = new HitCount
{
Unmodified = hitCountUnmodified,
Shift = hitCountShift,
Control = hitCountControl,
};
// 18
var jsonString = JsonUtility.ToJson(updatedCount);
PlayerPrefs.SetString(HitCountKey, jsonString);
PlayerPrefs.Save();
}
```
Compared to before, you see that checking the key and increasing the counter (13 - 16) is basically unchanged, except that the saving part is now a bit different.
First, we need to create a new `HitCount` object (17) and assign the three counts. Using `JsonUtility.ToJson()`, we can then (18) create a JSON string from this object and set it using the `PlayerPrefs`.
Remember to also call `Save()` here to make sure data cannot get lost in case the game crashes without being able to call `OnApplicationQuit()`.
Run the game, and after you've clicked the capsule a couple of times with or without Shift and Control, have a look at the result. The following screenshot shows the Windows registry which is where the `PlayerPrefs` get saved.
The location when using our example project is `HKEY_CURRENT_USER\SOFTWARE\Unity\UnityEditor\MongoDB Inc.\UnityPersistenceExample` and as you can see, our JSON is right there, saved in plain text. This is also one of the big downsides to keep in mind when using `PlayerPrefs`: Data is not safe and can easily be edited when saved in plain text. Watch out for our future tutorial on encryption, which is one option to improve the safety of your data.
## Conclusion
In this tutorial, we have seen how to save and load data using `PlayerPrefs`. They are very simple and easy to use and a great choice for some simple data points. If it gets a bit more complex, you can save data using multiple fields or wrapping them into an object which can then be serialized using `JSON`.
What happens if you want to persist multiple objects of the same class? Or multiple classes? Maybe with relationships between them? And what if the structure of those objects changes?
As you see, `PlayerPrefs` reach their limits really fast—as easy as they are to use, they are just as limited.
In future tutorials, we will explore other options to persist data in Unity and how they can solve some or all of the above questions.
Please provide feedback and ask any questions in the Realm Community Forum. | md | {
"tags": [
"C#",
"Realm",
"Unity"
],
"pageDescription": "Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well.\n\nIn this tutorial series, we will explore the options given to us by Unity and third-party libraries.",
"contentType": "Tutorial"
} | Saving Data in Unity3D Using PlayerPrefs | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/introducing-sync-geospatial-data | created | # Introducing Sync for Geospatial Data
Geospatial queries have been one of the most requested features in the Atlas Device SDKs and Realm for a long time. As of today, we have added support in Kotlin, JS, and .NET with the rest to follow soon. Geospatial queries unlock a powerful set of location-based applications, and today we will look at how to leverage the power of using them with sync to make your application both simple and efficient.
The dataset used in the following examples can be downloaded to your own database by following the instructions in the geospatial queries docs.
Let’s imagine that we want to build a “restaurants near me” application where the primary use case is to provide efficient, offline-first search for restaurants within a walkable distance of the user’s current location. How should we design such an app? Let’s consider a few options:
1. We could send the user’s location-based queries to the server and have them processed there. This promises to deliver accurate results but is bottlenecked on the server’s performance and may not scale well. We would like to avoid the frustrating user experience of having to wait on a loading icon after entering a search.
2. We could load relevant/nearby data onto the user’s device and do the search locally. This promises to deliver a fast search time and will be accurate to the degree that the data cached on the user’s device is up to date for the current location. But the question is, how do we decide what data to send to the device, and how do we keep it up to date?
With flexible sync and geospatial queries, we now have the tools to build the second solution, and it is much more efficient than an app that uses a REST API to fetch data.
## Filtering by radius
A simple design will be to subscribe to all restaurant data that is within a reasonable walkable distance from the user’s current location — let’s say .5 kilometer (~0.31 miles). To enable geospatial queries to work in flexible sync, your data has to be in the right shape. For complete instructions on how to configure your app to support geospatial queries in sync, see the documentation. But basically, the location field has to be added to the list of queryable fields. The sync schema will look something like this:
Syncing these types of shapes is in our upcoming roadmap, but until that is available, you can query the MongoDB data using the Atlas App Services API to get the BSON representation and parse that to build a GeoPolygon that Realm queries accept. Being able to filter on arbitrary shapes opens up all sorts of interesting geofencing applications, granting the app the ability to react to a change in location.
The ability to use flexible sync with geospatial queries makes it simple to design an efficient location-aware application. We are excited to see what you will use these features to create!
> **Ready to get started now?**
>
> Install one of our SDKs — start your journey with our docs or jump right into example projects with source code.
>
> Then, register for Atlas to connect to Atlas Device Sync, a fully-managed mobile backend as a service. Leverage out-of-the-box infrastructure, data synchronization capabilities, network handling, and much more to quickly launch enterprise-grade mobile apps.
>
> Finally, let us know what you think and get involved in our forums. See you there!
| md | {
"tags": [
"Realm"
],
"pageDescription": "Sync your data based on geospatial constraints using Atlas Device Sync in your applications.",
"contentType": "News & Announcements"
} | Introducing Sync for Geospatial Data | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/add-us-postal-abbreviations-atlas-search | created | # Add US Postal Abbreviations to Your Atlas Search in 5 Minutes
There are cases when it helps to have synonyms set up to work with your Atlas Search index. For example, if the search in your application needs to work with addresses, it might help to set up a list of common synonyms for postal abbreviations, so one could type in “blvd” instead of “boulevard” and still find all places with “boulevard” in the address.
This tutorial will show you how to set up your Atlas Search index to recognize US postal abbreviations.
## Prerequisites
To be successful with this tutorial, you will need:
* Python, to use a script that scrapes a list of street suffix abbreviations helpfully compiled by the United States Postal Service (USPS). This tutorial was written using Python 3.10.15, but you could try it on earlier versions of 3, if you’d like.
* A MongoDB Atlas cluster. Follow the Get Started with Atlas guide to create your account and a MongoDB cluster. For this tutorial, you can use your free-forever MongoDB Atlas cluster! Keep a note of your database username, password, and connection string as you will need those later.
* Rosetta, if you’re on a MacOS with an M1 chip. This will allow you to run MongoDB tools like mongoimport and mongosh.
* mongosh for running commands in the MongoDB shell. If you don’t already have it, install mongosh.
* A copy of mongoimport. If you have MongoDB installed on your workstation, then you may already have mongoimport installed. If not, follow the instructions on the MongoDB website to install mongoimport.
* We're going to be using a sample\_restaurants dataset in this tutorial since it contains address data. For instructions on how to load sample data, see the documentation. Also, you can see all available sample datasets.
The examples shown here were all written on a MacOS but should run on any unix-type system. If you're running on Windows, we recommend running the example commands inside the Windows Subsystem for Linux.
## A bit about synonyms in Atlas Search
To learn about synonyms in Atlas Search, we suggest you start by checking out our documentation. Synonyms allow you to index and search your collection for words that have the same or nearly the same meaning, or, in the case of our tutorial, you can search using different ways to write out an address and still get the results you expect. To set up and use synonyms in Atlas Search, you will need to:
1. Create a collection containing the synonyms in the same database as the collection you’re indexing. Note that every document in the synonyms collection must have a specific format.
2. Reference your synonyms collection in your search index definition via a synonym mapping.
3. Reference your synonym mapping in the $search command with the $text operator.
We will walk you through these steps in the tutorial, but first, let’s start with creating the JSON documents that will form our synonyms collection.
## Scrape the USPS postal abbreviations page
We will use the list of official street suffix abbreviations and a list of secondary unit designators from the USPS website to create a JSON document for each set of the synonyms.
All documents in the synonyms collection must have a specific format that specifies the type of synonyms—equivalent or explicit. Explicit synonyms have a one-way mapping. For example, if “boat” is explicitly mapped to “sail,” we’d be saying that if someone searches “boat,” we want to return all documents that include “sail” and “boat.” However, if we search the word “sail,” we would not get any documents that have the word “boat.” In the case of postal abbreviations, however, one can use all abbreviations interchangeably, so we will use the “equivalent” type of synonym in the mappingType field.
Here is a sample document in the synonyms collection for all the possible abbreviations of “avenue”:
```
"Avenue":
{
    "mappingType": "equivalent",
    "synonyms": ["AVENUE", "AV", "AVEN", "AVENU", "AVN", "AVNUE", "AVE"]
}
```
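For contrast, an explicit mapping document (like the boat/sail example described above) would look roughly like this. Note the additional input field listing the one-way source terms:
```
{
    "mappingType": "explicit",
    "input": ["boat"],
    "synonyms": ["boat", "sail"]
}
```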
We wrote the web scraping code for you in Python, and you can run it with the following commands to create a document for each synonym group:
```
git clone https://github.com/mongodb-developer/Postal-Abbreviations-Synonyms-Atlas-Search-Tutorial/
cd Postal-Abbreviations-Synonyms-Atlas-Search-Tutorial
python3 main.py
```
To see details of the Python code, read the rest of the section.
In order to scrape the USPS postal website, we will need to import the following packages/libraries and install them using pip: requests, BeautifulSoup, and pandas. We’ll also want to import json and re for formatting our data when we’re ready:
```
import json
import requests
from bs4 import BeautifulSoup
import pandas as pd
import re
```
Let’s start with the Street Suffix Abbreviations page. We want to create objects that represent both the URL and the page itself:
```
# Create a URL object
streetsUrl = 'https://pe.usps.com/text/pub28/28apc_002.htm'
# Create object page
headers = {
"User-Agent": 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Mobile Safari/537.36'}
streetsPage = requests.get(streetsUrl, headers=headers)
```
Next, we want to get the information on the page. We’ll start by parsing the HTML, and then get the table by its id:
```
# Obtain page's information
streetsSoup = BeautifulSoup(streetsPage.text, 'html.parser')
```
```
# Get the table by its id
streetsTable = streetsSoup.find('table', {'id': 'ep533076'})
```
Now that we have the table, we’re going to want to transform it into a dataframe, and then format it in a way that’s useful for us:
```
# Transform the table into a list of dataframes
streetsDf = pd.read_html(str(streetsTable))
```
One thing to take note of is that in the table provided on USPS’s website, one primary name is usually mapped to multiple commonly used names.
This means we need to dynamically group together commonly used names by their corresponding primary name and compile that into a list:
```
# Group together all "Commonly Used Street Suffix or Abbreviation" entries
streetsGroup = streetsDf[0].groupby(0)[1].apply(list)
```
Once our names are all grouped together, we can loop through them and export them as individual JSON files.
```
for x in range(streetsGroup.size):
    dictionary = {
        "mappingType": "equivalent",
        "synonyms": streetsGroup[x]
    }
    # export the JSON into a file
    with open(streetsGroup.index.values[x] + ".json", "w") as outfile:
        json.dump(dictionary, outfile)
```
Now, let’s do the same thing for the Secondary Unit Designators page:
Just as before, we’ll start with getting the page and transforming it to a dataframe:
```
# Create a URL object
unitsUrl = 'https://pe.usps.com/text/pub28/28apc_003.htm'
unitsPage = requests.get(unitsUrl, headers=headers)
# Obtain page's information
unitsSoup = BeautifulSoup(unitsPage.text, 'html.parser')
# Get the table by its id
unitsTable = unitsSoup.find('table', {'id': 'ep538257'})
# Transform the table into a list of dataframes
unitsDf = pd.read_html(str(unitsTable))
```
If we look at the table more closely, we can see that one of the values is blank. While it makes sense that the USPS would include this in the table, it’s not something that we want in our synonyms list.
*(Image: table with USPS descriptions and abbreviations)*
To take care of that, we’ll simply remove all rows that have blank values:
```
unitsDf[0] = unitsDf[0].dropna()
```
Next, we’ll take our new dataframe and turn it into a list:
```
# Create a 2D list that we will use for our synonyms
unitsList = unitsDf[0][[0, 2]].values.tolist()
```
You may have noticed that some of the values in the table have asterisks in them. Let’s quickly get rid of them so they won’t be included in our synonym mappings:
```
# Remove all non-alphanumeric characters
unitsList = [[re.sub("[^ \w]"," ",x).strip().lower() for x in y] for y in unitsList]
```
Now we can loop through them and export them as individual JSON files just as we did before. The one thing to note is that we want to restrict the range on which we’re iterating to include only the relevant data we want:
```
# Restrict the range to only retrieve the results we want
for x in range(1, len(unitsList) - 1):
    dictionary = {
        "mappingType": "equivalent",
        "synonyms": unitsList[x]
    }
    # export the JSON into a file
    with open(unitsList[x][0] + ".json", "w") as outfile:
        json.dump(dictionary, outfile)
```
## Create a synonyms collection with JSON schema validation
Now that we created the JSON documents for abbreviations, let’s load them all into a collection in the sample\_restaurants database. If you haven’t already created a MongoDB cluster, now is a good time to do that and load the sample data in.
The first step is to connect to your Atlas cluster. We will use mongosh to do it. If you don’t have mongosh installed, follow the instructions.
To connect to your Atlas cluster, you will need a connection string. Choose the “Connect with the MongoDB Shell” option and follow the instructions. Note that you will need to connect with a database user that has permissions to modify the database, since we will be creating a collection in the sample\_restaurants database. The command you need to enter in the terminal will look something like:
```
mongosh "mongodb+srv://cluster0.XXXXX.mongodb.net/sample_restaurant" --apiVersion 1 --username
```
When prompted for the password, enter the database user’s password.
We created our synonym JSON documents in the right format already, but let’s make sure that if we decide to add more documents to this collection, they will also have the correct format. To do that, we will create a synonyms collection with a validator that uses $jsonSchema. The commands below will create a collection with the name “postal\_synonyms” in the sample\_restaurants database and ensure that only documents with correct format are inserted into the collection.
```
use('sample_restaurants')
db.createCollection("postal_synonyms", { validator: { $jsonSchema: { "bsonType": "object", "required": "mappingType", "synonyms"], "properties": { "mappingType": { "type": "string", "enum": ["equivalent", "explicit"], "description": "must be a either equivalent or explicit" }, "synonyms": { "bsonType": "array", "items": { "type": "string" }, "description": "must be an Array with each item a string and is required" }, "input": { "type": "array", "items": { "type": "string" }, "description": "must be an Array and is each item is a string" } }, "anyOf": [{ "not": { "properties": { "mappingType": { "enum": ["explicit"] } }, "required": ["mappingType"] } }, { "required": ["input"] }] } } })
```
## Import the JSON files into the synonyms collection
We will use mongoimport to import all the JSON files we created.
You will need a connection string for your Atlas cluster to use in the mongoimport command. If you don’t already have mongoimport installed, use the instructions in the MongoDB documentation.
In the terminal, navigate to the folder where all the JSON files for postal abbreviation synonyms were created.
```
cat *.json | mongoimport --uri 'mongodb+srv://<username>:<password>@cluster0.pwh9dzy.mongodb.net/sample_restaurants?retryWrites=true&w=majority' --collection='postal_synonyms'
```
If you liked mongoimport, check out this very helpful mongoimport guide.
Take a look at the synonyms collection you just created in Atlas. You should see around 229 documents there.
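If you prefer to verify from mongosh, a quick count (sketch) looks like this:
```
use('sample_restaurants')
db.postal_synonyms.countDocuments()
```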
## Create a search index with synonyms mapping in JSON Editor
Now that we created the synonyms collection in our sample\_restaurants database, let’s put it to use.
Let’s start by creating a search index. Navigate to the Search tab in your Atlas cluster and click the “CREATE INDEX” button.
Since the Visual Index builder doesn’t support synonym mappings yet, we will choose JSON Editor and click Next:
In the JSON Editor, pick restaurants collection in the sample\_restaurants database and enter the following into the index definition. Here, the source collection name refers to the name of the collection with all the postal abbreviation synonyms, which we named “postal\_synonyms.”
```
{
"mappings": {
"dynamic": true
},
"synonyms":
{
"analyzer": "lucene.standard",
"name": "synonym_mapping",
"source": {
"collection": "postal_synonyms"
}
}
]
}
```
*(Image: the Create Search Index JSON Editor UI in Atlas)*
We are indexing the restaurants collection and creating a synonym mapping with the name “synonym\_mapping” that references the synonyms collection “postal\_synonyms.”
Click on Next and then on Create Search Index, and wait for the search index to build.
Once the index is active, we’re ready to test it out.
## Test that synonyms are working (aggregation pipeline in Atlas or Compass)
Now that we have an active search index, we’re ready to test that our synonyms are working. Let’s head to the Aggregation pipeline in the Collections tab to test different calls to $search. You can also use Compass, the MongoDB GUI, if you prefer.
Choose $search from the list of pipeline stages. The UI gives us a helpful placeholder for the $search command’s arguments.
Let’s look for all restaurants that are located on a boulevard. We will search in the “address.street” field, so the arguments to the $search stage will look like this:
```
{
index: 'default',
text: {
query: 'boulevard',
path: 'address.street'
}
}
```
Let’s add a $count stage after the $search stage to see how many restaurants with an address that contains “boulevard” we found:
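If you would rather run the whole pipeline from mongosh instead of the UI, a sketch of the two stages together might look like this (the “total” field name is arbitrary):
```
db.restaurants.aggregate([
  {
    $search: {
      index: 'default',
      text: { query: 'boulevard', path: 'address.street' }
    }
  },
  { $count: 'total' }
])
```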
As expected, we found a lot of restaurants with the word “boulevard” in the address. But what if we don’t want to have users type “boulevard” in the search bar? What would happen if we put in “blvd,” for example?
```
{
index: 'default',
text: {
query: 'blvd',
path: 'address.street'
}
}
```
Looks like it found us restaurants with addresses that have “blvd” in them. What about the addresses with “boulevard,” though? Those did not get picked up by the search.
And what if we weren’t sure how to spell “boulevard” and just searched for “boul”? USPS’s website tells us it’s an acceptable abbreviation for boulevard, but our $search finds nothing.
This is where our synonyms come in! We need to add a synonyms option to the text operator in the $search command and reference the synonym mapping’s name:
```
{
index: 'default',
text: {
query: 'blvd',
path: 'address.street',
synonyms:'synonym_mapping'
}
}
```
And there you have it! We found all the restaurants on boulevards, regardless of which way the address was abbreviated, all thanks to our synonyms.
## Conclusion
Synonyms is just one of many features Atlas Search offers to give you all the necessary search functionality in your application. All of these features are available right now on MongoDB Atlas. We just showed you how to add support for common postal abbreviations to your Atlas Search index—what can you do with Atlas Search next? Try it now on your free-forever MongoDB Atlas cluster and head over to community forums if you have any questions! | md | {
"tags": [
"Atlas"
],
"pageDescription": "This tutorial will show you how to set up your Atlas Search index to recognize US postal abbreviations. ",
"contentType": "Tutorial"
} | Add US Postal Abbreviations to Your Atlas Search in 5 Minutes | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/atlas-flask-and-azure-app-service | created | # Scaling for Demand: Deploying Python Applications Using MongoDB Atlas on Azure App Service
Managing large amounts of data locally can prove to be a challenge, especially as the amount of saved data grows. Fortunately, there is an efficient solution available. By utilizing the features of Flask, MongoDB Atlas, and Azure App Service, you can build and host powerful web applications that are capable of storing and managing tons of data in a centralized, secure, and scalable manner. Say goodbye to unreliable local files and hello to a scalable solution.
This in-depth tutorial will teach you how to build a functional CRUD (Create, Read, Update, and Delete) Flask application that connects to a MongoDB Atlas database, and is hosted on Azure App Service. Using Azure App Service and MongoDB together can be a great way to build and host web applications. Azure App Service makes it easy to build and deploy web apps, while MongoDB is great for storing and querying large amounts of data. With this combination, you can focus on building your application and let Azure take care of the underlying infrastructure and scaling.
This tutorial is aimed at beginners, but feel free to skip through this article and focus on the aspects necessary to your project.
We are going to be making a virtual bookshelf filled with some of my favorite books. Within this bookshelf, we will have the power to add a new book, view all the books in our bookshelf, exchange a book for another one of my favorites, or remove a book we would like to read. At the end of our tutorial, our bookshelf will be hosted so anyone with our website link can enjoy our book list too.
### Requirements
Before we begin, there are a handful of prerequisites we need:
* MongoDB Atlas account.
* Microsoft Azure App Services subscription.
* Postman Desktop (or another way to test our functions).
* Python 3.9+.
* GitHub Repository.
### Setting up a MongoDB Atlas cluster
Within MongoDB Atlas, we need to create a free cluster. Follow the instructions in our MongoDB Atlas Tutorial. Once your cluster has provisioned, create a database and collection within Atlas. Let’s name our database “bookshelf” and our collection “books.” Click on “Insert Document” and add in a book so that we have some data to start with. Your setup should look like this:
Now that we have our bookshelf set up, we are ready to connect to it and utilize our CRUD operations. Before we get started, let’s focus on *how* to properly connect.
## Cluster security access
Now that we have our cluster provisioned and ready to use, we need to make sure we have proper database access. Through Atlas, we can do this by heading to the “Security” section on the left-hand side of the screen. Ensure that under “Database Access,” you have enabled a user with at least “Read and Write” access. Under “Network Access,” ensure you’ve added in any and all IP addresses that you’re planning on accessing your database from. An easy way to do this is to set your IP address access to “0.0.0.0/0,” which allows you to access your cluster from any IP address. Atlas provides additional optional security features, such as network peering and private connections, for all the major cloud providers. Azure Private Link is one of these features, as is the use of an Azure virtual network peering connection if you’ve provisioned an M10 or above cluster.
## Setting up a Python virtual environment
Before we open up our Visual Studio Code, use your terminal to create a directory for where your bookshelf project will live.
Once we have our directory made, open it up in VSCode and access the terminal inside of VSCode. We are going to set up our Python virtual environment. We do this so our project’s dependencies have their own isolated spot to live, where nothing else already installed on the machine can interfere with them.
Set up your environment with:
```
python3 -m venv venv
```
Activate your environment with:
```
source venv/bin/activate
```
You’ll know you’re in your virtual environment when you see the little (venv) at the beginning of your hostname in your command line.
Once we are in our virtual environment, we are ready to set up our project requirements. A ‘requirements.txt’ file is used to specify the dependencies (various packages and their versions) required by the project to run. It helps ensure the correct versions of packages are installed when deploying the project to a new environment. This makes it much easier to reproduce the development environment and prevents any compatibility issues that may arise when using different versions of dependencies.
## Setting up our ‘requirements.txt’ file
Our ‘requirements.txt’ file will consist of four various dependencies this project requires. The first is Flask. Flask is a web micro-framework for Python. It provides the basic tools for building web apps, such as routing and request handling. Flask allows for easy integration with other libraries and frameworks and allows for flexibility and customizability. If you’ve never worked with Flask before, do not worry. By the end of this tutorial, you will have a clear understanding of how useful Flask can be.
The second dependency we have is PyMongo. PyMongo is a Python library for working with MongoDB. It provides a convenient way to interact with MongoDB databases and collections. We will be using it to connect to our database.
The third dependency we have is Python-dotenv. This is a tool used to store and access important information, like passwords and secret keys, in a safe and secure manner. Instead of hard-coding this information, Python-dotenv allows us to keep this information in an environment variable in a separate file that isn’t shared with anyone else. Later in this tutorial, we will go into more detail on how to properly set up environment variables in our project.
The last dependency we have in our file is Black. Black is a code formatter for Python and it enforces a consistent coding style, making it easier for developers to read and maintain the code. By using a common code style, it can improve readability and maintainability.
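Black isn’t imported anywhere in our application code. Instead, it’s run from the command line whenever we want to format the project; for example, from the project directory (a typical invocation, and optional for the rest of this tutorial):
```
python -m black .
```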
Include these four dependencies in your ‘requirements.txt’ file.
```
Flask==2.2.2
pymongo==4.3.3
python-dotenv==0.21.1
black==22.12.0
```
This way, we can install all our dependencies in one step:
```
pip install -r requirements.txt
```
***Troubleshooting***: After successfully installing PyMongo, a line in your terminal saying `dnspython has been installed` will likely pop up. It is worth noting that without `dnspython` properly downloaded, our next package `dotenv` won’t work. If, when attempting to run our script later, you are getting `ModuleNotFoundError: No module named dotenv`, include `dnspython==2.2.1` in your ‘requirements.txt’ file and rerun the command from above.
## Setting up our ‘app.py’ file
Our ‘app.py’ file is the main file where our code for our bookshelf project will live. Create a new file within our “azuredemo” directory and name it ‘app.py’. It is time for us to include our imports:
```
import bson
import os
from dotenv import load_dotenv
from flask import Flask, render_template, request
from pymongo import MongoClient
from pymongo.collection import Collection
from pymongo.database import Database
```
Here we have our environment variable imports, our Flask imports, our PyMongo imports, and the BSON import we need in order to work with binary JSON data.
Once we have our imports set up, we are ready to connect to our MongoDB Atlas cluster and implement our CRUD functions, but first let’s test and make sure Flask is properly installed.
Run this very simple Flask app:
```
app: Flask = Flask(__name__)
# our initial form page
@app.route('/')
def index():
   return "Hi!"
```
Here, we create a new Flask application object, which we name “app,” and give it the name of our current file. We then create a new route for the application. This tells the server which URL to listen for and which function to run when that URL is requested. In this specific example, the route is the homepage, and the function that runs returns the string “Hi!”.
Run your flask app using:
```
flask run
```
This opens up port 5000, which is Flask’s default port, but you can always switch the port you’re using by running the command:
```
flask run -p [port number]
```
When we access http://127.0.0.1:5000, we see our “Hi!” message.
So, our incredibly simple Flask app works! Amazing. Let’s now connect it to our database.
## Connecting our Flask app to MongoDB
As mentioned above, we are going to be using a database environment variable to connect our database. In order to do this, we need to set up an .env file. Add this file in the same directory we’ve been working with and include your MongoDB connection string. Your connection string is a URL-like string that is used to connect to a MongoDB server. It includes all the necessary details to connect to your specific cluster. This is how your setup should look:
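As a minimal sketch, the `.env` file contains a single line holding your connection string — the cluster hostname below is a placeholder, so copy the real one from the “Connect” dialog of your own Atlas cluster:
```
CONNECTION_STRING=mongodb+srv://username:password@yourcluster.mongodb.net/?retryWrites=true&w=majority
```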
Change out the `username` and `password` for your own. Make sure you have set the proper Network Access points from the paragraph above.
We want to use environment variables so we can keep them separate from our code. This way, there is privacy since the `CONNECTION_STRING` contains sensitive information. It is crucial for security and maintainability purposes.
Once you have your imports in, we need to add a couple lines of code above our Flask instantiation so we can connect to our .env file holding our `CONNECTION_STRING`, and connect to our Atlas database. At this point, your app.py should look like this:
```
import bson
import os
from dotenv import load_dotenv
from flask import Flask, render_template, request
from pymongo import MongoClient
from pymongo.collection import Collection
from pymongo.database import Database
# access your MongoDB Atlas cluster
load_dotenv()
connection_string: str = os.environ.get("CONNECTION_STRING")
mongo_client: MongoClient = MongoClient(connection_string)
# add in your database and collection from Atlas
database: Database = mongo_client.get_database("bookshelf")
collection: Collection = database.get_collection("books")
# instantiating new object with "name"
app: Flask = Flask(__name__)
# our initial form page
@app.route('/')
def index():
   return "Hi!"
```
Let’s test `app.py` and ensure our connection to our cluster is properly in place.
Add these two lines after your `collection: Collection = database.get_collection("books")` line and before your `# instantiating new object with "name"` comment to check and make sure your Flask application is really connected to your database:
```
book = {"title": "The Great Gatsby", "author": "F. Scott Fitzgerald", "year": 1925}
collection.insert_one(book)
```
Run your application, access Atlas, and you should see the additional copy of “The Great Gatsby” added.
(Screenshot: the “books” collection showing both copies of “The Great Gatsby.”)
Amazing! We have successfully connected our Flask application to MongoDB. Let’s start setting up our CRUD (Create, Read, Update, Delete) functions.
Feel free to delete those two added lines of code and manually remove both the Gatsby documents from Atlas. This was for testing purposes!
## Creating CRUD functions
Right now, we have hard-coded in our “Hi!” on the screen. Instead, it’s easier to render a template for our homepage. To do this, create a new folder called “templates” in your directory. Inside of this folder, create a file called: `index.html`. Here is where all the HTML and CSS for our homepage will go. This is highly customizable and not the focus of the tutorial, so please access this code from my Github (or make your own!).
Once our `index.html` file is complete, let’s link it to our `app.py` file so we can read everything correctly. This is where the addition of the `render_template` import comes in. Link your `index.html` file in your initial form page function like so:
```
# our initial form page
@app.route('/')
def index():
   return render_template("index.html")
```
When you run it, this should be your new homepage when accessing http://127.0.0.1:5000/:
We are ready to move on to our CRUD functions.
#### Create and read functions
We are combining our two Create and Read functions. This will allow us to add in a new book to our bookshelf, and be able to see all the books we have in our bookshelf depending on which request method we choose.
```
# CREATE and READ
@app.route('/books', methods=["GET", "POST"])
def books():
   if request.method == 'POST':
       # CREATE
       book: str = request.json['book']
       pages: str = request.json['pages']
       # insert new book into books collection in MongoDB
       collection.insert_one({"book": book, "pages": pages})
       return f"CREATE: Your book {book} ({pages} pages) has been added to your bookshelf.\n "
   elif request.method == 'GET':
       # READ
       bookshelf = list(collection.find())
       novels = []
       for titles in bookshelf:
           book = titles['book']
           pages = titles['pages']
           shelf = {'book': book, 'pages': pages}
           novels.insert(0, shelf)
       return novels
```
This function is connected to our ‘/books’ route and depending on which request method we send, we can either add in a new book, or see all the books we have already in our database. We are not going to be validating any of the data in this example because it is out of scope, but please use Postman, cURL, or a similar tool to verify the function is properly implemented. For this function, I inserted:
```
{
   "book": "The Odyssey",
   "pages": 384
}
```
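As a sketch of how to exercise the CREATE path with cURL (adjust the host and port if you changed Flask’s defaults):
```
curl -X POST http://127.0.0.1:5000/books \
  -H "Content-Type: application/json" \
  -d '{"book": "The Odyssey", "pages": 384}'
```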
If we head over to our Atlas portal, refresh, and check on our “bookshelf” database and “books” collection, this is what we will see:
(Screenshot: the “books” collection showing “The Odyssey.”)
Let’s insert one more book of our choosing just to add some more data to our database. I’m going to add in “*The Perks of Being a Wallflower*.”
Amazing! Read the database collection back and you should see both novels.
Let’s move onto our UPDATE function.
#### Update
For this function, we want to exchange a current book in our bookshelf with a different book.
```
# UPDATE
@app.route("/books/", methods = 'PUT'])
def update_book(book_id: str):
new_book: str = request.json['book']
new_pages: str = request.json['pages']
collection.update_one({"_id": bson.ObjectId(book_id)}, {"$set": {"book": new_book, "pages": new_pages}})
return f"UPDATE: Your book has been updated to: {new_book} ({new_pages} pages).\n"
```
This function allows us to exchange a book we currently have in our database with a new book. The exchange takes place via the book ID. To do so, access Atlas, copy the `_id` of the document you want to replace, and include it at the end of the URL. For this example, I want to exchange “The Odyssey” for “The Stranger.” Use your testing tool to send a PUT request to the update endpoint and view the results in Atlas.
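As a sketch, the cURL equivalent looks like this — the placeholder at the end of the URL is the document’s `_id` copied from Atlas, and the page count is just illustrative:
```
curl -X PUT http://127.0.0.1:5000/books/<book_id> \
  -H "Content-Type: application/json" \
  -d '{"book": "The Stranger", "pages": 123}'
```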
Once you hit send and refresh your Atlas database, you’ll see:
(Screenshot: the “books” collection with “The Stranger” and “The Perks of Being a Wallflower.”)
“The Odyssey” has been exchanged with “The Stranger”!
Now, let’s move onto our last function: the DELETE function.
#### Delete
```
# DELETE
@app.route("/books/", methods = 'DELETE'])
def remove_book(book_id: str):
collection.delete_one({"_id": bson.ObjectId(book_id)})
return f"DELETE: Your book (id = {book_id}) has been removed from your bookshelf.\n"
```
This function allows us to remove a specific book from our bookshelf. Similarly to the UPDATE function, we need to specify which book we want to delete through the URL route using the novel’s ID. Let’s remove the book we most recently added to the bookshelf, “The Stranger.”
Communicate with the delete endpoint and execute the function.
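As a sketch with cURL (again substituting the document’s `_id` copied from Atlas for the placeholder):
```
curl -X DELETE http://127.0.0.1:5000/books/<book_id>
```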
In Atlas our results are shown:
(Screenshot: the “books” collection showing only “The Perks of Being a Wallflower.”)
“The Stranger” has been removed!
Congratulations, you have successfully created a Flask application that can utilize CRUD functionalities, while using MongoDB Atlas as your database. That’s huge. But…no one else can use your bookshelf! It’s only hosted locally. Microsoft Azure App Service can help us out with this. Let’s host our Flask app on App Service.
## Host your application on Microsoft Azure App Service
We are using Visual Studio Code for this demo, so make sure you have installed the Azure extension and you have signed into your subscription. There are other ways to work with Azure App Service; using Visual Studio Code here is simply a personal preference.
If you’re properly logged in, you’ll see your Azure subscription on the left-hand side.
Click the (+) sign next to Resources:
Click on “Create App Service Web App”:
Enter a new name. This will serve as your website URL, so make sure it’s not too hectic:
Select your runtime stack. Mine is Python 3.9:
Select your pricing tier. The free tier will work for this demo.
In the Azure Activity Log, you will see the web app being created.
You will be asked to deploy your web app, and then choose the folder you want to deploy:
It will start deploying, as you’ll see through the “Output Window” in the Azure App Service Log.
Once it’s done, you’ll see a button that says “Browse Website.” Click on it.
As you can see, our application is now hosted at a different location! It now has its own URL.
Let’s make sure we can still utilize our CRUD operations with our new URL. Test again for each function.
At each step, if we refresh our MongoDB Atlas database, we will see these changes take place there as well. Great job!
## Conclusion
Congratulations! We have successfully created a Flask application from scratch, connected it to our MongoDB database, and hosted it on Azure App Service. These skills will continue to come in handy and I hope you enjoyed the journey. Separately, Azure App Service and MongoDB each offer a variety of benefits. Together, they are unstoppable! Combined, they provide a powerful platform for building and scaling web applications that can handle large amounts of data. Azure App Service makes it easy to deploy and scale web apps, while MongoDB provides a flexible and scalable data storage solution.
Get information on MongoDB Atlas, Azure App Service, and Flask.
If you liked this tutorial and would like to dive even further into MongoDB Atlas and the functionalities available, please view my YouTube video.
| md | {
"tags": [
"Python",
"MongoDB",
"Azure"
],
"pageDescription": "This tutorial will show you how to create a functional Flask application that connects to a MongoDB Atlas database and is hosted on Azure App Service.",
"contentType": "Tutorial"
} | Scaling for Demand: Deploying Python Applications Using MongoDB Atlas on Azure App Service | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/java-single-collection-springpart2 | created | # Single-Collection Designs in MongoDB with Spring Data (Part 2)
In Part 1 of this two-part series, we discussed single-collection design patterns in MongoDB and how they can be used to avoid the need for computationally expensive joins across collections. In this second part of the series, we will provide examples of how the single-collection pattern can be utilized in Java applications using Spring Data MongoDB and, in particular, how documents representing different classes but residing in the same collection can be accessed.
## Accessing polymorphic single collection data using Spring Data MongoDB
Whilst official, native idiomatic interfaces for MongoDB are available for 12 different programming languages, with community-provided interfaces available for many more, many of our customers have significant existing investment and knowledge developing Java applications using Spring Data. A common question we are asked is how polymorphic single-collection documents can be accessed using Spring Data MongoDB.
In the next few steps, I will show you how the Spring Data template model can be used to map airline, aircraft, and ADSB position report documents in a single collection named **aerodata**, to corresponding POJOs in a Spring application.
The code examples that follow were created using the NetBeans IDE, but any IDE that supports Java, including Eclipse and IntelliJ IDEA, can be used.
To get started, visit the Spring Initializr website and create a new Spring Boot project, adding Spring Data MongoDB as a dependency. In my example, I’m using Gradle, but you can use Maven, if you prefer.
Generate your template project, unpack it, and open it in your IDE:
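If you chose Gradle, the generated `build.gradle` should already include the Spring Data MongoDB starter; as a sketch, the relevant dependencies block looks roughly like this (versions are managed by the Spring Boot plugin, so yours may differ):
```groovy
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-data-mongodb'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
```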
Add a package to your project to store the POJO, repository class, and interface definitions. (In my project, I created a package called (**com.mongodb.devrel.gcr.aerodata**). For our demo, we will add four POJOs — **AeroData**, **Airline**, **Aircraft**, and **ADSBRecord** — to represent our data, with four corresponding repository interface definitions. **AeroData** will be an abstract base class from which the other POJOs will extend:
```java
package com.mongodb.devrel.gcr.aerodata;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
@Document(collection = "aeroData")
public abstract class AeroData {
@Id
public String id;
public Integer recordType;
//Getters and Setters...
}
```
``` java
package com.mongodb.devrel.gcr.aerodata;
import org.springframework.data.mongodb.repository.MongoRepository;
public interface AeroDataRepository extends MongoRepository<AeroData, String> {
}
```
``` java
package com.mongodb.devrel.gcr.aerodata;
import org.springframework.data.annotation.TypeAlias;
import org.springframework.data.mongodb.core.mapping.Document;
@Document(collection = "aeroData")
@TypeAlias("AirlineData")
public class Airline extends AeroData{
public String airlineName;
public String country;
public String countryISO;
public String callsign;
public String website;
public Airline(String id, String airlineName, String country, String countryISO, String callsign, String website) {
this.id = id;
this.airlineName = airlineName;
this.country = country;
this.countryISO = countryISO;
this.callsign = callsign;
this.website = website;
}
@Override
public String toString() {
return String.format(
"Airlineid=%s, name='%s', country='%s (%s)', callsign='%s', website='%s']",
id, airlineName, country, countryISO, callsign, website);
}
}
```
``` java
package com.mongodb.devrel.gcr.aerodata;
import org.springframework.data.mongodb.repository.MongoRepository;
public interface AirlineRepository extends MongoRepository<Airline, String> {
}
```
``` java
package com.mongodb.devrel.gcr.aerodata;
import org.springframework.data.annotation.TypeAlias;
import org.springframework.data.mongodb.core.mapping.Document;
@Document(collection = "aeroData")
@TypeAlias("AircraftData")
public class Aircraft extends AeroData{
public String tailNumber;
public String manufacturer;
public String model;
public Aircraft(String id, String tailNumber, String manufacturer, String model) {
this.id = id;
this.tailNumber = tailNumber;
this.manufacturer = manufacturer;
this.model = model;
}
@Override
public String toString() {
return String.format(
"Aircraft[id=%s, tailNumber='%s', manufacturer='%s', model='%s']",
id, tailNumber, manufacturer, model);
}
}
```
``` java
package com.mongodb.devrel.gcr.aerodata;
import org.springframework.data.mongodb.repository.MongoRepository;
public interface AircraftRepository extends MongoRepository<Aircraft, String> {
}
```
``` java
package com.mongodb.devrel.gcr.aerodata;
import java.util.Date;
import org.springframework.data.annotation.TypeAlias;
import org.springframework.data.mongodb.core.mapping.Document;
@Document(collection = "aeroData")
@TypeAlias("ADSBRecord")
public class ADSBRecord extends AeroData {
public Integer altitude;
public Integer heading;
public Integer speed;
public Integer verticalSpeed;
public Date timestamp;
public GeoPoint geoPoint;
public ADSBRecord(String id, Integer altitude, Integer heading, Integer speed, Integer verticalSpeed, Date timestamp, GeoPoint geoPoint) {
this.id = id;
this.altitude = altitude;
this.heading = heading;
this.speed = speed;
this.verticalSpeed = verticalSpeed;
this.timestamp = timestamp;
this.geoPoint = geoPoint;
}
@Override
public String toString() {
return String.format(
"ADSB[id=%s, altitude='%d', heading='%d', speed='%d', verticalSpeed='%d' timestamp='%tc', latitude='%f', longitude='%f']",
id, altitude, heading, speed, verticalSpeed, timestamp, geoPoint == null ? null : geoPoint.coordinates[1], geoPoint == null ? null : geoPoint.coordinates[0]);
}
}
```
``` java
package com.mongodb.devrel.gcr.aerodata;
import org.springframework.data.mongodb.repository.MongoRepository;
public interface ADSBRecordRepository extends MongoRepository<ADSBRecord, String> {
}
```
We’ll also add a **GeoPoint** class to hold location information within the **ADSBRecord** objects:
``` java
package com.mongodb.devrel.gcr.aerodata;
public class GeoPoint {
public String type;
public Double[] coordinates;
// Constructor used when creating the ADSB position reports in the main application class
public GeoPoint(Double[] coordinates) {
this.coordinates = coordinates;
}
//Getters and Setters...
}
```
Note the annotations used in the four main POJO classes. We’ve used the “**@Document**” annotation to specify the MongoDB collection into which data for each class should be saved. In each case, we’ve specified the “**aeroData**” collection. In the **Airline**, **Aircraft**, and **ADSBRecord** classes, we’ve also used the “**@TypeAlias**” annotation. Spring Data will automatically add a “**\_class**” field to each of our documents containing the Java class name of the originating object. The **TypeAlias** annotation allows us to override the value saved in this field and can be useful early in a project’s development if it’s suspected the class type may change. Finally, in the **AeroData** abstract class, we’ve used the “**@Id**” annotation to specify the field Spring Data will use in the MongoDB \_id field of our documents.
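To make this concrete, once the Delta Air Lines object used later in this post has been saved, the persisted document should look roughly like the following sketch (null fields are omitted by Spring Data, and field order may differ):
```json
{
  "_id": "DAL",
  "airlineName": "Delta Air Lines",
  "country": "United States",
  "countryISO": "US",
  "callsign": "DELTA",
  "website": "delta.com",
  "_class": "AirlineData"
}
```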
Let’s go ahead and update our project to add and retrieve some data. Start by adding your MongoDB connection URI to application.properties. (A free MongoDB Atlas cluster can be created if you need one by signing up at [cloud.mongodb.com.)
```
spring.data.mongodb.uri=mongodb://myusername:mypassword@abc-c0-shard-00-00.ftspj.mongodb.net:27017,abc-c0-shard-00-01.ftspj.mongodb.net:27017,abc-c0-shard-00-02.ftspj.mongodb.net:27017/air_tracker?ssl=true&replicaSet=atlas-k9999h-shard-0&authSource=admin&retryWrites=true&w=majority
```
Note that having unencrypted user credentials in a properties file is obviously not best practice from a security standpoint and this approach should only be used for testing and educational purposes. For more details on options for connecting to MongoDB securely, including the use of keystores and cloud identity mechanisms, refer to the MongoDB documentation.
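For example — as a sketch rather than a complete security solution — the URI can be pulled from an environment variable at start-up so that no credentials live in the file itself:
```
spring.data.mongodb.uri=${MONGODB_URI}
```
Here, `MONGODB_URI` is an environment variable you would export before running the application.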
With our connection details in place, we can now update the main application entry class. Because we are not using a view or controller, we’ll set the application up as a **CommandLineRunner** to view output on the command line:
```java
package com.mongodb.devrel.gcr.aerodata;
import java.util.Date;
import java.util.Optional;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class AerodataApplication implements CommandLineRunner {
@Autowired
private AirlineRepository airlineRepo;
@Autowired
private AircraftRepository aircraftRepo;
@Autowired
private ADSBRecordRepository adsbRepo;
public static void main(String[] args) {
SpringApplication.run(AerodataApplication.class, args);
}
@Override
public void run(String... args) throws Exception {
// save an airline
airlineRepo.save(new Airline("DAL", "Delta Air Lines", "United States", "US", "DELTA", "delta.com"));
// add some aircraft
aircraftRepo.save(new Aircraft("DAL_a93d7c", "N695CA", "Bombardier Inc", "CL-600-2D24"));
aircraftRepo.save(new Aircraft("DAL_ab8379", "N8409N", "Bombardier Inc", "CL-600-2B19"));
aircraftRepo.save(new Aircraft("DAL_a36f7e", "N8409N", "Airbus Industrie", "A319-114"));
//Add some ADSB position reports
Double[] coords1 = {55.991776, -4.776722};
GeoPoint geoPoint = new GeoPoint(coords1);
adsbRepo.save(new ADSBRecord("DAL_a36f7e_1", 38825, 319, 428, 1024, new Date(1656980617041l), geoPoint));
Double[] coords2 = {55.994843, -4.781466};
geoPoint = new GeoPoint(coords2);
adsbRepo.save(new ADSBRecord("DAL_a36f7e_2", 38875, 319, 429, 1024, new Date(1656980618041l), geoPoint));
Double[] coords3 = {55.99606, -4.783344};
geoPoint = new GeoPoint(coords3);
adsbRepo.save(new ADSBRecord("DAL_a36f7e_3", 38892, 319, 428, 1024, new Date(1656980619041l), geoPoint));
// fetch all airlines
System.out.println("Airlines found with findAll():");
System.out.println("-------------------------------");
for (Airline airline : airlineRepo.findAll()) {
System.out.println(airline);
}
// fetch a specific airline by ICAO ID
System.out.println("Airline found with findById():");
System.out.println("-------------------------------");
Optional<Airline> airlineResponse = airlineRepo.findById("DAL");
System.out.println(airlineResponse.get());
System.out.println();
// fetch all aircraft
System.out.println("Aircraft found with findAll():");
System.out.println("-------------------------------");
for (Aircraft aircraft : aircraftRepo.findAll()) {
System.out.println(aircraft);
}
// fetch a specific aircraft by ICAO ID
System.out.println("Aircraft found with findById():");
System.out.println("-------------------------------");
Optional<Aircraft> aircraftResponse = aircraftRepo.findById("DAL_a36f7e");
System.out.println(aircraftResponse.get());
System.out.println();
// fetch all adsb records
System.out.println("ADSB records found with findAll():");
System.out.println("-------------------------------");
for (ADSBRecord adsb : adsbRepo.findAll()) {
System.out.println(adsb);
}
// fetch a specific ADSB Record by ID
System.out.println("ADSB Record found with findById():");
System.out.println("-------------------------------");
Optional<ADSBRecord> adsbResponse = adsbRepo.findById("DAL_a36f7e_1");
System.out.println(adsbResponse.get());
System.out.println();
}
}
```
Spring Boot takes care of a lot of details in the background for us, including establishing a connection to MongoDB and autowiring our repository classes. On running the application, we are:
1. Using the save method on the **Airline**, **Aircraft**, and **ADSBRecord** repositories respectively to add an airline, three aircraft, and three ADSB position report documents to our collection.
2. Using the findAll and findById methods on the **Airline**, **Aircraft**, and **ADSBRecord** repositories respectively to retrieve, in turn, all airline documents, a specific airline document, all aircraft documents, a specific aircraft document, all ADSB position report documents, and a specific ADSB position report document.
If everything is configured correctly, we should see the following output on the command line:
```bash
Airlines found with findAll():
-------------------------------
Airline[id=DAL, name='Delta Air Lines', country='United States (US)', callsign='DELTA', website='delta.com']
Airline[id=DAL_a93d7c, name='null', country='null (null)', callsign='null', website='null']
Airline[id=DAL_ab8379, name='null', country='null (null)', callsign='null', website='null']
Airline[id=DAL_a36f7e, name='null', country='null (null)', callsign='null', website='null']
Airline[id=DAL_a36f7e_1, name='null', country='null (null)', callsign='null', website='null']
Airline[id=DAL_a36f7e_2, name='null', country='null (null)', callsign='null', website='null']
Airline[id=DAL_a36f7e_3, name='null', country='null (null)', callsign='null', website='null']
Airline found with findById():
-------------------------------
Airline[id=DAL, name='Delta Air Lines', country='United States (US)', callsign='DELTA', website='delta.com']
Aircraft found with findAll():
-------------------------------
Aircraft[id=DAL, tailNumber='null', manufacturer='null', model='null']
Aircraft[id=DAL_a93d7c, tailNumber='N695CA', manufacturer='Bombardier Inc', model='CL-600-2D24']
Aircraft[id=DAL_ab8379, tailNumber='N8409N', manufacturer='Bombardier Inc', model='CL-600-2B19']
Aircraft[id=DAL_a36f7e, tailNumber='N8409N', manufacturer='Airbus Industrie', model='A319-114']
Aircraft[id=DAL_a36f7e_1, tailNumber='null', manufacturer='null', model='null']
Aircraft[id=DAL_a36f7e_2, tailNumber='null', manufacturer='null', model='null']
Aircraft[id=DAL_a36f7e_3, tailNumber='null', manufacturer='null', model='null']
Aircraft found with findById():
-------------------------------
Aircraft[id=DAL_a36f7e, tailNumber='N8409N', manufacturer='Airbus Industrie', model='A319-114']
ADSB records found with findAll():
-------------------------------
ADSB[id=DAL, altitude='null', heading='null', speed='null', verticalSpeed='null' timestamp='null', latitude='null', longitude='null']
ADSB[id=DAL_a93d7c, altitude='null', heading='null', speed='null', verticalSpeed='null' timestamp='null', latitude='null', longitude='null']
ADSB[id=DAL_ab8379, altitude='null', heading='null', speed='null', verticalSpeed='null' timestamp='null', latitude='null', longitude='null']
ADSB[id=DAL_a36f7e, altitude='null', heading='null', speed='null', verticalSpeed='null' timestamp='null', latitude='null', longitude='null']
ADSB[id=DAL_a36f7e_1, altitude='38825', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:37 BST 2022', latitude='55.991776', longitude='-4.776722']
ADSB[id=DAL_a36f7e_2, altitude='38875', heading='319', speed='429', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:38 BST 2022', latitude='55.994843', longitude='-4.781466']
ADSB[id=DAL_a36f7e_3, altitude='38892', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:39 BST 2022', latitude='55.996060', longitude='-4.783344']
ADSB Record found with findById():
-------------------------------
ADSB[id=DAL_a36f7e_1, altitude='38825', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:37 BST 2022', latitude='-4.776722', longitude='55.991776']
```
As you can see, our data has been successfully added to the MongoDB collection, and we are able to retrieve the data. However, there is a problem. The findAll methods of each of the repository objects are returning a result for every document in our collection, not just the documents of the class type associated with each repository. As a result, we are seeing seven documents being returned for each record type — airline, aircraft, and ADSB — when we would expect to see only one airline, three aircraft, and three ADSB position reports. Note this issue is common across all the “All” repository methods — findAll, deleteAll, and notifyAll. A call to the deleteAll method on the airline repository would result in all documents in the collection being deleted, not just airline documents.
To address this, we have two options: We could override the standard Spring Boot repository findAll (and deleteAll/notifyAll) methods to factor in the class associated with each calling repository class, or we could extend the repository interface definitions to include methods to specifically retrieve only documents of the corresponding class. In our exercise, we’ll concentrate on the latter approach by updating our repository interface definitions:
```java
package com.mongodb.devrel.gcr.aerodata;
import java.util.List;
import java.util.Optional;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;
public interface AirlineRepository extends MongoRepository<Airline, String> {
@Query("{_class: \"AirlineData\"}")
List<Airline> findAllAirlines();
@Query(value="{_id: /^?0/, _class: \"AirlineData\"}", sort = "{_id: 1}")
Optional<Airline> findAirlineByIcaoAddr(String icaoAddr);
}
```
```java
package com.mongodb.devrel.gcr.aerodata;
import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;
public interface AircraftRepository extends MongoRepository<Aircraft, String> {
@Query("{_class: \"AircraftData\"}")
List<Aircraft> findAllAircraft();
@Query("{_id: /^?0/, _class: \"AircraftData\"}")
List<Aircraft> findAircraftDataByIcaoAddr(String icaoAddr);
}
```
```java
package com.mongodb.devrel.gcr.aerodata;
import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;
public interface ADSBRecordRepository extends MongoRepository<ADSBRecord, String> {
@Query(value="{_class: \"ADSBRecord\"}",sort="{_id: 1}")
List<ADSBRecord> findAllADSBRecords();
@Query(value="{_id: /^?0/, _class: \"ADSBRecord\"}", sort = "{_id: 1}")
List<ADSBRecord> findADSBDataByIcaoAddr(String icaoAddr);
}
```
In each interface, we’ve added two new function definitions — one to return all documents of the relevant type, and one to allow documents to be returned when searching by ICAO address. Using the @Query annotation, we are able to format the queries as needed.
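Under the covers, a call such as `findAircraftDataByIcaoAddr("DAL")` sends a filter to the `aeroData` collection that is roughly equivalent to the following shell query — shown here only as a sketch to illustrate what the `@Query` annotation produces:
```javascript
db.aeroData.find({ _id: /^DAL/, _class: "AircraftData" })
```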
With our function definitions in place, we can now update the main application class:
```java
package com.mongodb.devrel.gcr.aerodata;
import java.util.Date;
import java.util.Optional;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class AerodataApplication implements CommandLineRunner {
@Autowired
private AirlineRepository airlineRepo;
@Autowired
private AircraftRepository aircraftRepo;
@Autowired
private ADSBRecordRepository adsbRepo;
public static void main(String[] args) {
SpringApplication.run(AerodataApplication.class, args);
}
@Override
public void run(String... args) throws Exception {
//Delete any records from a previous run;
airlineRepo.deleteAll();
// save an airline
airlineRepo.save(new Airline("DAL", "Delta Air Lines", "United States", "US", "DELTA", "delta.com"));
// add some aircraft
aircraftRepo.save(new Aircraft("DAL_a93d7c", "N695CA", "Bombardier Inc", "CL-600-2D24"));
aircraftRepo.save(new Aircraft("DAL_ab8379", "N8409N", "Bombardier Inc", "CL-600-2B19"));
aircraftRepo.save(new Aircraft("DAL_a36f7e", "N8409N", "Airbus Industrie", "A319-114"));
//Add some ADSB position reports
Double[] coords1 = {-4.776722, 55.991776};
GeoPoint geoPoint = new GeoPoint(coords1);
adsbRepo.save(new ADSBRecord("DAL_a36f7e_1", 38825, 319, 428, 1024, new Date(1656980617041l), geoPoint));
Double[] coords2 = {-4.781466, 55.994843};
geoPoint = new GeoPoint(coords2);
adsbRepo.save(new ADSBRecord("DAL_a36f7e_2", 38875, 319, 429, 1024, new Date(1656980618041l), geoPoint));
Double[] coords3 = {-4.783344, 55.99606};
geoPoint = new GeoPoint(coords3);
adsbRepo.save(new ADSBRecord("DAL_a36f7e_3", 38892, 319, 428, 1024, new Date(1656980619041l), geoPoint));
// fetch all airlines
System.out.println("Airlines found with findAllAirlines():");
System.out.println("-------------------------------");
for (Airline airline : airlineRepo.findAllAirlines()) {
System.out.println(airline);
}
System.out.println();
// fetch a specific airline by ICAO ID
System.out.println("Airlines found with findAirlineByIcaoAddr(\"DAL\"):");
System.out.println("-------------------------------");
Optional<Airline> airlineResponse = airlineRepo.findAirlineByIcaoAddr("DAL");
System.out.println(airlineResponse.get());
System.out.println();
// fetch all aircraft
System.out.println("Aircraft found with findAllAircraft():");
System.out.println("-------------------------------");
for (Aircraft aircraft : aircraftRepo.findAllAircraft()) {
System.out.println(aircraft);
}
System.out.println();
// fetch Aircraft Documents specific to airline "DAL"
System.out.println("Aircraft found with findAircraftDataByIcaoAddr(\"DAL\"):");
System.out.println("-------------------------------");
for (Aircraft aircraft : aircraftRepo.findAircraftDataByIcaoAddr("DAL")) {
System.out.println(aircraft);
}
System.out.println();
// fetch Aircraft Documents specific to aircraft "a36f7e"
System.out.println("Aircraft found with findAircraftDataByIcaoAddr(\"DAL_a36f7e\"):");
System.out.println("-------------------------------");
for (Aircraft aircraft : aircraftRepo.findAircraftDataByIcaoAddr("DAL_a36f7e")) {
System.out.println(aircraft);
}
System.out.println();
// fetch all adsb records
System.out.println("ADSB records found with findAllADSBRecords():");
System.out.println("-------------------------------");
for (ADSBRecord adsb : adsbRepo.findAllADSBRecords()) {
System.out.println(adsb);
}
System.out.println();
// fetch ADSB Documents specific to airline "DAL"
System.out.println("ADSB Documents found with findADSBDataByIcaoAddr(\"DAL\"):");
System.out.println("-------------------------------");
for (ADSBRecord adsb : adsbRepo.findADSBDataByIcaoAddr("DAL")) {
System.out.println(adsb);
}
System.out.println();
// fetch ADSB Documents specific to aircraft "a36f7e"
System.out.println("ADSB Documents found with findADSBDataByIcaoAddr(\"DAL_a36f7e\"):");
System.out.println("-------------------------------");
for (ADSBRecord adsb : adsbRepo.findADSBDataByIcaoAddr("DAL_a36f7e")) {
System.out.println(adsb);
}
}
}
```
Note that as well as the revised search calls, we also added a call to deleteAll on the airline repository to remove data added by prior runs of the application.
With the updates in place, when we run the application, we should now see the expected output:
```bash
Airlines found with findAllAirlines():
-------------------------------
Airline[id=DAL, name='Delta Air Lines', country='United States (US)', callsign='DELTA', website='delta.com']
Airlines found with findAirlineByIcaoAddr("DAL"):
-------------------------------
Airline[id=DAL, name='Delta Air Lines', country='United States (US)', callsign='DELTA', website='delta.com']
Aircraft found with findAllAircraft():
-------------------------------
Aircraft[id=DAL_a93d7c, tailNumber='N695CA', manufacturer='Bombardier Inc', model='CL-600-2D24']
Aircraft[id=DAL_ab8379, tailNumber='N8409N', manufacturer='Bombardier Inc', model='CL-600-2B19']
Aircraft[id=DAL_a36f7e, tailNumber='N8409N', manufacturer='Airbus Industrie', model='A319-114']
Aircraft found with findAircraftDataByIcaoAddr("DAL"):
-------------------------------
Aircraft[id=DAL_a36f7e, tailNumber='N8409N', manufacturer='Airbus Industrie', model='A319-114']
Aircraft[id=DAL_a93d7c, tailNumber='N695CA', manufacturer='Bombardier Inc', model='CL-600-2D24']
Aircraft[id=DAL_ab8379, tailNumber='N8409N', manufacturer='Bombardier Inc', model='CL-600-2B19']
Aircraft found with findAircraftDataByIcaoAddr("DAL_a36f7e"):
-------------------------------
Aircraft[id=DAL_a36f7e, tailNumber='N8409N', manufacturer='Airbus Industrie', model='A319-114']
ADSB records found with findAllADSBRecords():
-------------------------------
ADSB[id=DAL_a36f7e_1, altitude='38825', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:37 BST 2022', latitude='55.991776', longitude='-4.776722']
ADSB[id=DAL_a36f7e_2, altitude='38875', heading='319', speed='429', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:38 BST 2022', latitude='55.994843', longitude='-4.781466']
ADSB[id=DAL_a36f7e_3, altitude='38892', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:39 BST 2022', latitude='55.996060', longitude='-4.783344']
ADSB Documents found with findADSBDataByIcaoAddr("DAL"):
-------------------------------
ADSB[id=DAL_a36f7e_1, altitude='38825', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:37 BST 2022', latitude='55.991776', longitude='-4.776722']
ADSB[id=DAL_a36f7e_2, altitude='38875', heading='319', speed='429', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:38 BST 2022', latitude='55.994843', longitude='-4.781466']
ADSB[id=DAL_a36f7e_3, altitude='38892', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:39 BST 2022', latitude='55.996060', longitude='-4.783344']
ADSB Documents found with findADSBDataByIcaoAddr("DAL_a36f7e"):
-------------------------------
ADSB[id=DAL_a36f7e_1, altitude='38825', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:37 BST 2022', latitude='55.991776', longitude='-4.776722']
ADSB[id=DAL_a36f7e_2, altitude='38875', heading='319', speed='429', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:38 BST 2022', latitude='55.994843', longitude='-4.781466']
ADSB[id=DAL_a36f7e_3, altitude='38892', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:39 BST 2022', latitude='55.996060', longitude='-4.783344']
```
In this two-part post, we have seen how polymorphic single-collection designs in MongoDB can provide all the query flexibility of normalized relational designs, whilst simultaneously avoiding anti-patterns such as unbounded arrays and unnecessary joins. This makes the resulting collections highly performant from a search standpoint and amenable to horizontal scaling. We have also shown how we can work with these designs using Spring Data MongoDB.
The example source code used in this series is available on GitHub. | md | {
"tags": [
"Java",
"MongoDB",
"Spring"
],
"pageDescription": "In the second part of the series, we will provide examples of how the single-collection pattern can be utilized in Java applications using Spring Data MongoDB.",
"contentType": "Tutorial"
} | Single-Collection Designs in MongoDB with Spring Data (Part 2) | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/easy-migration-relational-database-mongodb-relational-migrator | created | # Easy Migration: From Relational Database to MongoDB with MongoDB Relational Migrator
Defining the process of data migration from a relational database to MongoDB has always been a complex task. Some have opted for a custom approach, adopting custom solutions such as scripts, whereas others have preferred to use third-party tools.
It is in this context that the Relational Migrator enters the picture, melting the complexity of this transition from a relational database to MongoDB as naturally as the sun melts the snow.
## How Relational Migrator comes to our help
In the context of a relational database to MongoDB migration project, several questions arise — for example:
- What tool should you use to best perform this migration?
- How can this migration process be made time-optimal for a medium/large size database?
- How will the data need to be modeled on MongoDB?
- How much time/resources will it take to restructure SQL queries to MQL?
Consider the following architecture, as an example:
(Architecture diagram: data flows from a PostgreSQL source, through a Logstash pipeline, into MongoDB.)
Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash."
This tool effectively achieves the goal by dynamically performing the following operations:
- Ingestion of data from the source — PostgreSQL
- Data transformation — Logstash
- Distribution of the transformed data to the destination — MongoDB
Great! So it is possible to migrate the data and benefit from a great deal of flexibility in transforming it — and, assuming we have done some careful tuning, the migration times for a medium/large database can be kept relatively short — but each pipeline will have to be defined manually.
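To give a feel for that manual work, here is a minimal sketch of one such hand-written pipeline that reads rows from PostgreSQL and writes them to MongoDB. The plugin options shown are assumptions that depend on your Logstash version, the plugins you have installed, and your JDBC driver:
```
input {
  jdbc {
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/sourcedb"
    jdbc_user => "postgres"
    jdbc_password => "secret"
    statement => "SELECT * FROM customers"
  }
}
output {
  mongodb {
    uri => "mongodb+srv://user:password@cluster0.example.mongodb.net"
    database => "targetdb"
    collection => "customers"
  }
}
```
Every table (or join) you want to migrate needs its own statement and output mapping, which is exactly the effort Relational Migrator is designed to remove.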
Let’s make what we have discussed so far concrete by considering an example scheme.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to migrate from a relational database management system (RDBMS) to the Document Model of MongoDB using the MongoDB Relational Migrator utility.",
"contentType": "Tutorial"
} | Easy Migration: From Relational Database to MongoDB with MongoDB Relational Migrator | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/build-smart-applications-atlas-vector-search-google-vertex-ai | created | # Build Smart Applications With Atlas Vector Search and Google Vertex AI
The application development landscape is evolving very rapidly. Today, users crave intuitive, context-aware experiences that understand their intent and deliver relevant results even when queries aren't perfectly phrased, putting an end to the keyword-based search practices. This is where MongoDB Atlas and Google Cloud Vertex AI can help users build and deploy scalable and resilient applications.
MongoDB Atlas Vector Search is a cutting-edge tool that indexes and stores high-dimensional vectors, representing the essence of your data. It allows you to perform lightning-fast similarity searches, retrieving results based on meaning and context. Google Vertex AI is a comprehensive AI platform that houses an abundance of pre-trained models and tools, including the powerful Vertex AI PALM. This language model excels at extracting semantic representations from text data, generating those crucial vectors that fuel MongoDB Atlas Vector Search.
Vector Search can be useful in a variety of contexts, such as natural language processing and recommendation systems. It is a powerful technique that can be used to find similar data based on its meaning.
In this tutorial, we will see how to get started with MongoDB Atlas and Vertex AI. If you are new to MongoDB Atlas, refer to the documentation to get set up from Google Cloud Marketplace or use the Atlas registration page.
## Before we begin
Make sure that you have the below prerequisites set up before starting to test your application.
1. MongoDB Atlas access, either by the registration page or from Google Cloud Marketplace
2. Access to Google Cloud Project to deploy and create a Compute Engine instance
## How to get set up
Let us consider a use case where we are loading sample PDF documents to MongoDB Atlas as vectors and deploying an application on Google Cloud to perform a vector search on the PDF documents.
We will start with the creation of MongoDB Atlas Vector Search Index on the collection to store and retrieve the vectors generated by the Google Vertex AI PALM model. To store and access vectors on MongoDB Atlas, we need to create an Atlas Search index.
### Create an Atlas Search index
1. Navigate to the **Database Deployments** page for your project.
2. Click on **Create Database.** Name your Database **vertexaiApp** and your collection **chat-vec**.
3. Click **Atlas Search** from the Services menu in the navigation bar.
4. Click **Create Search Index** and select **JSON Editor** under **Atlas Vector Search**. Then, click **Next.**
5. In the **Database and Collection** section, find the database **vertexaiApp**, and select the **chat-vec** collection.
6. Replace the default definition with the following index definition and then click **Next**. Click on **Create Search index** on the review page.
```json
{
"fields":
{
"type":"vector",
"path":"vec",
"numDimensions":768,
"similarity": "cosine"
}
]
}
```
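Once the index is active, the stored vectors can be queried with a `$vectorSearch` aggregation stage. The following is a rough sketch only — it assumes the index above was saved under the name `vector_index`, that you supply your own connection string, and that the query vector is a 768-dimension embedding produced by the Vertex AI embedding model (a constant placeholder is used here just to keep the example self-contained):
```python
from pymongo import MongoClient

client = MongoClient("<your-atlas-connection-string>")  # assumption: your own Atlas URI
collection = client["vertexaiApp"]["chat-vec"]

# Placeholder query vector; in the real app this comes from the Vertex AI embedding model
query_vector = [0.01] * 768

results = collection.aggregate([
    {
        "$vectorSearch": {
            "index": "vector_index",   # assumption: the name you gave the index above
            "path": "vec",
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
        }
    }
])
for doc in results:
    print(doc["_id"])
```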
### Create a Google Cloud Compute instance
We will create a Google Cloud virtual machine instance to run and deploy the application. The Google Cloud VM can have all the default configurations. To begin, log into your Google Cloud Console and perform the following steps:
- In the Google Cloud console, click on **Navigation menu > Compute Engine.**
- Create a new VM instance with the below configurations:
- **Name:** vertexai-chatapp
- **Region**: region near your physical location
- **Machine configurations:**
- Machine type: High Memory, n1-standard-1
- Boot disk: Click on **CHANGE**
- Increase the size to 100 GB.
- Leave the other options to default (Debian).
- Access: Select **Allow full access** to all Cloud APIs.
- Firewall: Select all.
- Advanced options:
- Networking: Expand the default network interface.
- For External IP range: Expand the section and click on **RESERVE STATIC EXTERNAL IP ADDRESS**. This will help users to access the deployed application from the internet.
- Name your IP and click on **Done**.
- Click on **CREATE** and the VM will be created in about two to three minutes.
### Deploy the application
Once the VM instance is created, SSH into the VM instance and clone the GitHub repository.
```
git clone https://github.com/mongodb-partners/MongoDB-VertexAI-Qwiklab.git
```
The repository contains a script to create and deploy a Streamlit application to transform and store PDFs in MongoDB Atlas, then search them lightning-fast with Atlas Vector Search. The app.py script in the repository uses Python and LangChain to leverage MongoDB Atlas as our data source and Google Vertex AI for generating embeddings.
We start by setting up connections and then utilize LangChain’s ChatVertexAI and Google's Vertex AI embeddings to transform the PDF being loaded into searchable vectors. Finally, we construct the Streamlit app structure, enabling users to input queries and view the top retrieved documents based on vector similarity.
Install the required dependencies on your virtual machine using the below commands:
```bash
sudo apt update
sudo apt install python3-pip
sudo apt install git
git --version
pip3 --version
cd MongoDB-VertexAI-Qwiklab
pip3 install -r requirements.txt
```
Once the requirements are installed, you can run the application using the below command. Open the application using the public IP of your VM and the port mentioned in the command output:
```bash
streamlit run app.py
```
You can subscribe to MongoDB Atlas from Google Cloud Marketplace using our pay-as-you-go model and take advantage of our simplified billing.
| md | {
"tags": [
"Atlas",
"Python",
"Google Cloud"
],
"pageDescription": "Learn how to leverage MongoDB Atlas Vector Search to perform semantic search, Google Vertex AI for AI capabilities, and LangChain for seamless integration to build smart applications.",
"contentType": "Tutorial"
} | Build Smart Applications With Atlas Vector Search and Google Vertex AI | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/johns-hopkins-university-covid-19-rest-api | created | # A Free REST API for Johns Hopkins University COVID-19 dataset
## TL;DR
> Here is the REST API Documentation in Postman.
## News
### November 15th, 2023
- John Hopkins University (JHU) has stopped collecting data as of March 10th, 2023.
- Here is JHU's GitHub repository.
- First data entry is 2020-01-22, last one is 2023-03-09.
- Current REST API is implemented using Third-Party Services which is now deprecated.
- Hosting the REST API honestly isn't very valuable now as the data isn't updated anymore and the entire cluster is available below.
- The REST API will be removed on November 1st, 2024; but possibly earlier as it's currently mostly being queried for dates after the last entry.
### December 10th, 2020
- Added 3 new calculated fields:
- confirmed_daily.
- deaths_daily.
- recovered_daily.
### September 10th, 2020
- Let me know what you think in our topic in the community forum.
- Fixed a bug in my code which was failing if the IP address wasn't collected properly.
## Introduction
Recently, we built the MongoDB COVID-19 Open Data project using the dataset from Johns Hopkins University (JHU).
There are two big advantages to using this cluster, rather than directly using JHU's CSV files:
- It's updated automatically every hour so any update in JHU's repo will be there after a maximum of one hour.
- You don't need to clean, parse and transform the CSV files, our script does this for you!
The MongoDB Atlas cluster is freely accessible using the user `readonly` and the password `readonly` using the connection string:
```none
mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19
```
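For example, here is a quick sketch of how you could explore the data directly from Python with PyMongo (this assumes PyMongo — with dnspython for the `+srv` scheme — is installed; the field names follow the `global_and_us` examples shown later in this post):
```python
from pymongo import MongoClient

# Public read-only credentials for the curated COVID-19 cluster
client = MongoClient("mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19")
doc = client["covid19"]["global_and_us"].find_one(
    {"country": "Canada", "state": "Alberta"},
    sort=[("date", -1)],  # most recent entry for this location
)
print(doc)
```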
You can use this cluster to build your application, but what about having a nice and free REST API to access this curated dataset?!
## COVID-19 REST API
> Here is the REST API Documentation in Postman.
You can use the button in the top right corner **Run in Postman** to directly import these examples in Postman and give them a spin.
Hopefully, this freely accessible data and API can help you build tools that scale and, ultimately, help address this global pandemic.
## But how did I build this?
Simple and easy, I used the MongoDB App Services Third-Party HTTP services to build my HTTP webhooks.
> Third-Party Services are now deprecated. Please use custom HTTPS Endpoints instead from now on.
Each time you call an API, a serverless JavaScript function is executed to fetch your documents. Let's look at the three parts of this function separately, for the **Global & US** webhook (the most detailed collection!):
- First, I log the IP address each time a webhook is called. I'm using the IP address for my `_id` field which permits me to use an upsert operation.
```javascript
function log_ip(payload) {
const log = context.services.get("pre-prod").db("logs").collection("ip");
let ip = "IP missing";
try {
ip = payload.headers"X-Envoy-External-Address"][0];
} catch (error) {
console.log("Can't retrieve IP address.")
}
console.log(ip);
log.updateOne({"_id": ip}, {"$inc": {"queries": 1}}, {"upsert": true})
.then( result => {
console.log("IP + 1: " + ip);
});
}
```
- Then I retrieve the query parameters and I build the query that I'm sending to the MongoDB cluster along with the projection and sort options.
```javascript
function isPositiveInteger(str) {
return ((parseInt(str, 10).toString() == str) && str.indexOf('-') === -1);
}
exports = function(payload, response) {
log_ip(payload);
const {uid, country, state, country_iso3, min_date, max_date, hide_fields} = payload.query;
const coll = context.services.get("mongodb-atlas").db("covid19").collection("global_and_us");
var query = {};
var project = {};
const sort = {'date': 1};
if (uid !== undefined && isPositiveInteger(uid)) {
query.uid = parseInt(uid, 10);
}
if (country !== undefined) {
query.country = country;
}
if (state !== undefined) {
query.state = state;
}
if (country_iso3 !== undefined) {
query.country_iso3 = country_iso3;
}
if (min_date !== undefined && max_date === undefined) {
query.date = {'$gte': new Date(min_date)};
}
if (max_date !== undefined && min_date === undefined) {
query.date = {'$lte': new Date(max_date)};
}
if (min_date !== undefined && max_date !== undefined) {
query.date = {'$gte': new Date(min_date), '$lte': new Date(max_date)};
}
if (hide_fields !== undefined) {
const fields = hide_fields.split(',');
for (let i = 0; i < fields.length; i++) {
project[fields[i].trim()] = 0
}
}
console.log('Query: ' + JSON.stringify(query));
console.log('Projection: ' + JSON.stringify(project));
// [...]
};
```
- Finally, I build the answer with the documents from the cluster and I'm adding a `Contact` header so you can send us an email if you want to reach out.
```javascript
exports = function(payload, response) {
// [...]
coll.find(query, project).sort(sort).toArray()
.then( docs => {
response.setBody(JSON.stringify(docs));
response.setHeader("Contact","devrel@mongodb.com");
});
};
```
Here is the entire JavaScript function if you want to copy & paste it.
```javascript
function isPositiveInteger(str) {
return ((parseInt(str, 10).toString() == str) && str.indexOf('-') === -1);
}
function log_ip(payload) {
const log = context.services.get("pre-prod").db("logs").collection("ip");
let ip = "IP missing";
try {
ip = payload.headers["X-Envoy-External-Address"][0];
} catch (error) {
console.log("Can't retrieve IP address.")
}
console.log(ip);
log.updateOne({"_id": ip}, {"$inc": {"queries": 1}}, {"upsert": true})
.then( result => {
console.log("IP + 1: " + ip);
});
}
exports = function(payload, response) {
log_ip(payload);
const {uid, country, state, country_iso3, min_date, max_date, hide_fields} = payload.query;
const coll = context.services.get("mongodb-atlas").db("covid19").collection("global_and_us");
var query = {};
var project = {};
const sort = {'date': 1};
if (uid !== undefined && isPositiveInteger(uid)) {
query.uid = parseInt(uid, 10);
}
if (country !== undefined) {
query.country = country;
}
if (state !== undefined) {
query.state = state;
}
if (country_iso3 !== undefined) {
query.country_iso3 = country_iso3;
}
if (min_date !== undefined && max_date === undefined) {
query.date = {'$gte': new Date(min_date)};
}
if (max_date !== undefined && min_date === undefined) {
query.date = {'$lte': new Date(max_date)};
}
if (min_date !== undefined && max_date !== undefined) {
query.date = {'$gte': new Date(min_date), '$lte': new Date(max_date)};
}
if (hide_fields !== undefined) {
const fields = hide_fields.split(',');
for (let i = 0; i < fields.length; i++) {
project[fields[i].trim()] = 0
}
}
console.log('Query: ' + JSON.stringify(query));
console.log('Projection: ' + JSON.stringify(project));
coll.find(query, project).sort(sort).toArray()
.then( docs => {
response.setBody(JSON.stringify(docs));
response.setHeader("Contact","devrel@mongodb.com");
});
};
```
One detail to note: the payload is limited to 1MB per query. If you want to consume more data, I would recommend using the MongoDB cluster directly, as mentioned earlier, or filtering the output to only return the fields you really need using the `hide_fields` parameter. See the documentation for more details.
## Examples
Here are a couple of example of how to run a query.
- With this one you can retrieve all the metadata which will help you populate the query parameters in your other queries:
```shell
curl --location --request GET 'https://webhooks.mongodb-stitch.com/api/client/v2.0/app/covid-19-qppza/service/REST-API/incoming_webhook/metadata'
```
- The `covid19.global_and_us` collection is probably the most complete database in this system as it combines all the data from JHU's time series into a single collection. With the following query, you can filter down what you need from this collection:
```shell
curl --location --request GET 'https://webhooks.mongodb-stitch.com/api/client/v2.0/app/covid-19-qppza/service/REST-API/incoming_webhook/global_and_us?country=Canada&state=Alberta&min_date=2020-04-22T00:00:00.000Z&max_date=2020-04-27T00:00:00.000Z&hide_fields=_id,%20country,%20country_code,%20country_iso2,%20country_iso3,%20loc,%20state'
```
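If you'd rather call the API from code than from curl, here is a minimal Python sketch of the second request above. It assumes you have the `requests` package installed; any HTTP client would work just as well:
```python
import requests

BASE_URL = "https://webhooks.mongodb-stitch.com/api/client/v2.0/app/covid-19-qppza/service/REST-API/incoming_webhook"

# Same filters as the curl example above.
params = {
    "country": "Canada",
    "state": "Alberta",
    "min_date": "2020-04-22T00:00:00.000Z",
    "max_date": "2020-04-27T00:00:00.000Z",
    "hide_fields": "_id, country, country_code, country_iso2, country_iso3, loc, state",
}

response = requests.get(f"{BASE_URL}/global_and_us", params=params)
response.raise_for_status()

# The webhook returns a JSON array of documents.
for doc in response.json():
    print(doc)
```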
Again, the REST API documentation in Postman is the place to go to review all the options that are offered to you.
## Wrap Up
I truly hope you will be able to build something amazing with this REST API. Even if it won't save the world from this COVID-19 pandemic, I hope it will be a great source of motivation and training for your next pet project.
Send me a tweet with your project, I will definitely check it out!
> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"Atlas",
"Serverless",
"Postman API"
],
"pageDescription": "Making the Johns Hopkins University COVID-19 Data open and accessible to all, with MongoDB, through a simple REST API.",
"contentType": "Article"
} | A Free REST API for Johns Hopkins University COVID-19 dataset | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/wordle-solving-mongodb-query-api-operators | created | # Wordle Solving Using MongoDB Query API Operators
This article details one of my MongoDB Atlas learning journeys. I joined MongoDB in the fall of 2022 as a Developer Advocate for Atlas Search. With a couple of decades of Lucene experience, I know search, but I had little experience with MongoDB itself. As part of my initiation, I needed to learn the MongoDB Query API, and coupled my learning with my Wordle interest.
## Game on: Introduction to Wordle
The online game Wordle took the world by storm in 2022. For many, including myself, Wordle has become a part of the daily routine. If you’re not familiar with Wordle, let me first apologize for introducing you to your next favorite time sink. The Wordle word guessing game gives you six chances to guess the five-letter word of the day. After a guess, each letter of the guessed word is marked with clues indicating how well it matches the answer. Let’s jump right into an example, with our first guess being the word `ZESTY`. Wordle gives us these hints after that guess:
The hints tell us that the letter E is in the goal word though not in the second position and that the letters `Z`, `S`, `T`, and `Y` are not in the solution in any position. Our next guess factors in these clues, giving us more information about the answer:
Do you know the answer at this point? Before we reveal it, let’s learn some MongoDB and build a tool to help us choose possible solutions given the hints we know.
## Modeling the data as BSON in MongoDB
We can easily use the forever free tier of hosted MongoDB, called Atlas. To play along, visit the Atlas homepage and create a new account or log in to your existing one.
Once you have an Atlas account, create a database to contain a collection of words. All of the possible words that can be guessed or used as daily answers are built into the source code of the single page Wordle app itself. These words have been extracted into a list that we can quickly ingest into our Atlas collection.
I created a repository for the data and code here. The README shows how to import the word list into your Atlas collection so you can play along.
The query operations needed are:
* Find words that have a specific letter in an exact position.
* Find words that do not contain any of a set of letters.
* Find words that contain a set of specified letters, but not in any known positions.
In order to accommodate these types of criteria, a word document looks like this, using the word MONGO to illustrate:
```
{
"_id":"MONGO",
"letter1":"M",
"letter2":"O",
"letter3":"N",
"letter4":"G",
"letter5":"O",
"letters":"M","O","N","G"]
}
```
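If you'd like to build documents in this shape yourself (for example, before inserting them with PyMongo) instead of using the prepared import from the repository, a small Python helper like this sketch does the trick (the function name here is mine, not something from the repository):
```python
def to_word_doc(word):
    """Build a document in the shape shown above from a five-letter word."""
    word = word.upper()
    doc = {"_id": word}
    for position, letter in enumerate(word, start=1):
        doc[f"letter{position}"] = letter
    # Unique letters in first-seen order, e.g. MONGO -> ["M", "O", "N", "G"]
    doc["letters"] = list(dict.fromkeys(word))
    return doc

print(to_word_doc("mongo"))
```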
## Finding the matches with the MongoDB Query API
Each word is its own document and structured to facilitate the types of queries needed. I come from a background of full-text search where it makes sense to break down documents into the atomic findable units for clean query-ability and performance. There are, no doubt, other ways to implement the document structure and query patterns for this challenge, but bear with me while we learn how to use MongoDB Query API with this particular structure. Each letter position of the word has its own field, so we can query for exact matches. There is also a catch-all field containing an array of all unique characters in the word so queries do not have to be necessarily concerned with positions.
Let’s build up the MongoDB Query API to find words that match the hints from our initial guess. First, what words do not contain `Z`, `S`, `T`, or `Y`? Using MongoDB Query API query operators in a `.find()` API call, we can use the `$nin` (not in) operator as follows:
```
{
"letters":{
"$nin":"Z","S","T","Y"]
}
}
```
Independently, a `.find()` for all words that have a letter `E` but not in the second position looks like this, using the `$all` operator, as there could potentially be multiple letters we know are in the solution but not which position they are in:
```
{
"letters":{
"$all":"E"]
},
"letter2":{"$nin":["E"]}
}
```
To find the possible solutions, we combine all criteria for all the hints. After our `ZESTY` guess, the full `.find()` criteria is:
```
{
"letters":{
"$nin":["Z","S","T","Y"],
"$all":["E"]
},
"letter2":{"$nin":["E"]}
}
```
Out of the universe of all 2,309 words, there are 394 words possible after our first guess.
Now on to our second guess, `BREAD`, which gave us several other tidbits of information about the answer. We now know that the answer also does not contain the letters `B` or `D`, so we add that to our letters field `$nin` clause. We also know the answer has an `R` and `A` somewhere, but not in the positions we initially guessed. And we now know the third letter is an `E`, which is matched using the `$eq` operator. Combining all of this information from both of our guesses, `ZESTY` and `BREAD`, we end up with this criteria:
```
{
"letters":{
"$nin":"Z","S","T","Y","B","D"],
"$all":["E","R","A"]
},
"letter2":{"$nin":["E","R"]},
"letter3":{"$eq":"E"},
"letter4":{"$nin":["A"]}
}
```
Has the answer revealed itself yet to you? If not, go ahead and import the word list into your Atlas cluster and run the query.
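If you want to run that combined criteria with PyMongo rather than the Ruby helper described next, here is a minimal sketch. The connection string is a placeholder, and the `wordle.words` database and collection names are assumptions on my part; adjust them to match however you imported the word list:
```python
from pymongo import MongoClient

# Placeholder URI and assumed database/collection names -- adjust to your import.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
words = client["wordle"]["words"]

criteria = {
    "letters": {
        "$nin": ["Z", "S", "T", "Y", "B", "D"],
        "$all": ["E", "R", "A"],
    },
    "letter2": {"$nin": ["E", "R"]},
    "letter3": {"$eq": "E"},
    "letter4": {"$nin": ["A"]},
}

for doc in words.find(criteria):
    print(doc["_id"])
```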
It’s tedious to accumulate all of the hints into `.find()` criteria manually, and duplicate letters in the answer can present a challenge when translating the color-coded hints to MongoDB Query API, so I wrote a bit of Ruby code to handle the details. From the command-line, using this code, the possible words after our first guess look like this:
```
$ ruby word_guesser.rb "ZESTY x~xxx"
{"letters":{"$nin":"Z","S","T","Y"],"$all":["E"]},"letter2":{"$nin":["E"]}}
ABIDE
ABLED
ABODE
ABOVE
.
.
.
WOVEN
WREAK
WRECK
394
```
The output of running `word_guesser.rb` consists first of the MongoDB Query API generated, followed by all of the possible matching words given the hints provided, ending with the number of words listed. The command-line arguments to the word guessing script are one or more quoted strings consisting of the guessed word and a representation of the hints provided from that word where `x` is a greyed out letter, `~` is a yellow letter, and `^` is a green letter. It’s up to the human solver to pick one of the listed words to try for the next guess. After our second guess, the command and output are:
```
$ ruby word_guesser.rb "ZESTY x~xxx" "BREAD x~^~x"
{"letters":{"$nin":["Z","S","T","Y","B","D"],"$all":["E","R","A"]},"letter2":{"$nin":["E","R"]},"letter3":{"$eq":"E"},"letter4":{"$nin":["A"]}}
OPERA
1
```
Voila, solved! Only one possible word after our second guess.
In summary, this fun exercise allowed me to learn the MongoDB Query API, specifically the `$all`, `$eq`, and `$nin` operators, for this challenge.
To learn more about the MongoDB Query API, check out these resources:
* Introduction to MongoDB Query API
* Getting Started with Atlas and the MongoDB Query Language (MQL), now referred to as the MongoDB Query API
* The free MongoDB CRUD Operations: Insert and Find Documents course at MongoDB University | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Let’s learn a few MongoDB Query API operators while solving Wordle",
"contentType": "Article"
} | Wordle Solving Using MongoDB Query API Operators | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/media-storage-integrating-azure-blob-storage-mongodb-spring-boot | created | # Seamless Media Storage: Integrating Azure Blob Storage and MongoDB with Spring Boot
From social media to streaming services, many applications require a mixture of different types of data. If you are designing an application that requires storing images or videos, a good idea is to store your media using a service specially designed to handle large objects of unstructured data.
Your MongoDB database is not the best place to store your media files directly. The maximum BSON document size is 16MB. This helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. This provides an obstacle as this limit can easily be surpassed by images or videos.
MongoDB provides GridFS as a solution to this problem. MongoDB GridFS is a specification for storing and retrieving large files that exceed the BSON-document size limit and works by dividing the file into chunks and storing each chunk as a separate document. In a second collection, it stores the metadata for these files, including what chunks each file is composed of. While this may work for some use cases, oftentimes, it is a good idea to use a service dedicated to storing large media files and linking to that in your MongoDB document. Azure Blob (**B**inary **L**arge **Ob**jects) Storage is optimized for storing massive amounts of unstructured data and designed for use cases such as serving images directly to a browser, streaming video and audio, etc. Unstructured data is data that doesn't conform to a specific data model or format, such as binary data (how we store our media files).
In this tutorial, we are going to build a Java API with Spring Boot that allows you to upload your files, along with any metadata you wish to store. When you upload your file, such as an image or video, it will upload to Azure Blob Storage. It will store the metadata, along with a link to where the file is stored, in your MongoDB database. This way, you get all the benefits of MongoDB databases while taking advantage of how Azure Blob Storage deals with these large files.
## Prerequisites
- A recent version of the Java Development Kit (JDK) or higher
- Maven or Gradle, but this tutorial will reference Maven
- A MongoDB cluster deployed and configured; if you need help, check out our MongoDB Atlas tutorial on how to get started
- An Azure account with an active subscription
## Set up Azure Storage
There are a couple of different ways you can set up your Azure storage, but we will use the Microsoft Azure Portal. Sign in with your Azure account and it will take you to the home page. At the top of the page, search "Storage accounts."
Select the subscription and resource group you wish to use, and give your storage account a name. The region, performance, and redundancy settings depend on your plans for this application, but the lowest tiers have all the features we need.
In networking, select to enable public access from all networks. This might not be desirable for production but for following along with this tutorial, it allows us to bypass configuring rules for network access.
For everything else, we can accept the default settings. Once your storage account is created, we’re going to navigate to the resource. You can do this by clicking “Go to resource,” or return to the home page and it will be listed under your resources.
The next step is to set up a container. A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs. On the left blade, select the containers tab, and click the plus container option. A menu will come up where you name your container (and configure access level if you don't want the default, private). Now, let's launch our container!
In order to connect your application to Azure Storage, create your `Shared Access Signature` (SAS). SAS allows you to have granular control over how your client can access the data. Select “Shared access signature” from the left blade menu and configure it to allow the services and resource types you wish to allow. For this tutorial, select “Object” under the allowed resource types. Object is for blob-level APIs, allowing operations on individual blobs, like uploading, downloading, or deleting an image. The rest of the settings you can leave as the default configuration. If you would like to learn more about what configurations are best suited for your application, check out Microsoft’s documentation. Once you have configured it to your desired settings, click “Generate SAS and connection string.” Your SAS will be generated below this button.
You will also need your MongoDB Atlas connection string: open your cluster in Atlas and click “Connect.” If you need help, check out our guide in the docs.
Next, create a repository interface, `ImageMetadataRepository`, that extends `MongoRepository`. With MongoRepository, you don’t need to provide implementation code for many basic CRUD methods for your MongoDB database, such as save, findById, findAll, delete, etc. Spring Data MongoDB automatically generates the necessary implementation based on the method names and conventions.
Now that we have the repository set up, it's time to set up our service layer. This acts as the intermediate between our repository (data access layer) and our controller (REST endpoints) and contains the applications business logic. We'll create another package `com.example.azureblob.service` and add our class `ImageMetadataService.java`.
```java
@Service
public class ImageMetadataService {
@Autowired
private ImageMetadataRepository imageMetadataRepository;
@Value("${spring.cloud.azure.storage.blob.container-name}")
private String containerName;
@Value("${azure.blob-storage.connection-string}")
private String connectionString;
private BlobServiceClient blobServiceClient;
@PostConstruct
public void init() {
blobServiceClient = new BlobServiceClientBuilder().connectionString(connectionString).buildClient();
}
public ImageMetadata save(ImageMetadata metadata) {
return imageMetadataRepository.save(metadata);
}
public List<ImageMetadata> findAll() {
return imageMetadataRepository.findAll();
}
public Optional<ImageMetadata> findById(String id) {
return imageMetadataRepository.findById(id);
}
public String uploadImageWithCaption(MultipartFile imageFile, String caption) throws IOException {
String blobFileName = imageFile.getOriginalFilename();
BlobClient blobClient = blobServiceClient.getBlobContainerClient(containerName).getBlobClient(blobFileName);
blobClient.upload(imageFile.getInputStream(), imageFile.getSize(), true);
String imageUrl = blobClient.getBlobUrl();
ImageMetadata metadata = new ImageMetadata();
metadata.setCaption(caption);
metadata.setImageUrl(imageUrl);
imageMetadataRepository.save(metadata);
return "Image and metadata uploaded successfully!";
}
}
```
Here we have a couple of our methods set up for finding our documents in the database and saving our metadata. Our `uploadImageWithCaption` method contains the integration with Azure Blob Storage. Here you can see we create a `BlobServiceClient` to interact with Azure Blob Storage. After it succeeds in uploading the image, it gets the URL of the uploaded blob. It then stores this, along with our other metadata for the image, in our MongoDB database.
Our last step is to set up a controller to establish our endpoints for the application. In a Spring Boot application, controllers handle requests, process data, and produce responses, making it possible to expose APIs and build web applications. Create a package `com.example.azureblob.service` and add the class `ImageMetadataController.java`.
```java
@RestController
@RequestMapping("/image-metadata")
public class ImageMetadataController {
@Autowired
private ImageMetadataService imageMetadataService;
@PostMapping("/upload")
public String uploadImageWithCaption(@RequestParam("image") MultipartFile imageFile, @RequestParam("caption") String caption) throws IOException {
return imageMetadataService.uploadImageWithCaption(imageFile, caption);
}
@GetMapping("/")
public List<ImageMetadata> getAllImageMetadata() {
return imageMetadataService.findAll();
}
@GetMapping("/{id}")
public ImageMetadata getImageMetadataById(@PathVariable String id) {
return imageMetadataService.findById(id).orElse(null);
}
}
```
Here we're able to retrieve all our metadata documents or search by `_id`, and we are able to upload our documents.
This should be everything you need to upload your files and store the metadata in MongoDB. Let's test it out! You can use your favorite tool for testing APIs but I'll be using a cURL command.
```console
curl -F "image=mongodb-is-webscale.png" -F "caption=MongoDB is Webscale" http://localhost:8080/blob/upload
```
Now, let's check how that looks in our database and Azure storage. If we look in our collection in MongoDB, we can see our metadata, including the URL to the image. Here we just have a few fields, but depending on your application, you might want to store information like when this document was created, the filetype of the data being stored, or even the size of the file.
If you’d like to keep building with MongoDB and Azure, check out our other tutorials, such as How to Use Azure Functions with MongoDB Atlas in Java.
| md | {
"tags": [
"Atlas",
"Java",
"Spring",
"Azure"
],
"pageDescription": "This tutorial describes how to build a Spring Boot Application to upload your media files into Azure Blob Storage, while storing associated metadata in MongoDB.",
"contentType": "Tutorial"
} | Seamless Media Storage: Integrating Azure Blob Storage and MongoDB with Spring Boot | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/azure-functions-mongodb-atlas-java | created | # How to Use Azure Functions with MongoDB Atlas in Java
Cloud computing is one of the most discussed topics in the tech industry. Having the ability to scale your infrastructure up and down instantly is just one of the many benefits associated with serverless apps. In this article, we are going to write a function as a service (FaaS) — i.e., a serverless function that will interact with data via a database — to produce meaningful results. FaaS can also be very useful in A/B testing when you want to quickly release an independent function without going into actual implementation or release.
> In this article, you'll learn how to use MongoDB Atlas, a cloud database, when you're getting started with Azure functions in Java.
## Prerequisites
1. A Microsoft Azure account that we will be using for running and deploying our serverless function. If you don't have one, you can sign up for free.
2. A MongoDB Atlas account, which is a cloud-based document database. You can sign up for an account for free.
3. IntelliJ IDEA Community Edition to aid our development
activities for this tutorial. If this is not your preferred IDE, then you can use other IDEs like Eclipse, Visual Studio, etc., but the steps will be slightly different.
4. An Azure supported Java Development Kit (JDK) for Java, version 8 or 11.
5. A basic understanding of the Java programming language.
## Serverless function: Hello World!
Getting started with the Azure serverless function is very simple, thanks to the Azure IntelliJ plugin, which offers various features — from generating boilerplate code to the deployment of the Azure function. So, before we jump into actual code, let's install the plugin.
### Installing the Azure plugin
The Azure plugin can be installed on IntelliJ in a very standard manner using the IntelliJ plugin manager. Open Plugins and then search for "_Azure Toolkit for IntelliJ_" in the Marketplace. Click Install.
With this, we are ready to create our first Azure function.
### First Azure function
Now, let's create a project that will contain our function and have the necessary dependencies to execute it. Go ahead and select File > New > Project from the menu bar, select Azure functions from Generators as shown below, and hit Next.
Now we can edit the project details if needed, or you can leave them on default.
In the last step, update the name of the project and location.
With this complete, we have a bootstrapped project with a sample function implementation. Without further ado, let's run this and see it in action.
### Deploying and running
We can deploy the Azure function either locally or on the cloud. Let's start by deploying it locally. To deploy and run locally, press the play icon against the function name on line 20, as shown in the above screenshot, and select run from the dialogue.
Copy the URL shown in the console log and open it in the browser to run the Azure function.
This will prompt passing the name as a query parameter as defined in the bootstrapped function.
```java
if (name == null) {
return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
.body("Please pass a name on the query string or in the request body").build();
} else {
return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
}
```
Update the URL by appending the query parameter `name` to
`http://localhost:XXXXX/api/HttpExample?name=World`, which will print the desired result.
To learn more, you can also follow the official guide.
## Connecting the serverless function with MongoDB Atlas
In the previous step, we created our first Azure function, which takes user input and returns a result. But real-world applications are far more complicated than this. In order to create a real-world function, which we will do in the next section, we need to
understand how to connect our function with a database, as logic operates over data and databases hold the data.
Similar to the serverless function, let's use a database that is also on the cloud and has the ability to scale up and down as needed. We'll be using MongoDB Atlas, which is a document-based cloud database.
### Setting up an Atlas account
Creating an Atlas account is very straightforward, free forever, and perfect to validate any MVP project idea, but if you need a guide, you can follow the documentation.
### Adding the Azure function IP address in Atlas Network Config
The Azure function uses multiple IP addresses instead of a single address, so let's add them to Atlas. To get the range of IP addresses, open your Azure account and search networking inside your Azure virtual machine. Copy the outbound addresses from outbound traffic.
One of the steps while creating an account with Atlas is to add the IP address for accepting incoming connection requests. This is essential to prevent unwanted access to our database. In our case, Atlas will get all the connection requests from the Azure function, so let's add this address.
Add these IP addresses individually under Network Access.
### Installing dependency to interact with Atlas
There are various ways of interacting with Atlas. Since we are building a service using a serverless function in Java, my preference is to use MongoDB Java driver. So, let's add the dependency for the driver in the `build.gradle` file.
```groovy
dependencies {
implementation 'com.microsoft.azure.functions:azure-functions-java-library:3.0.0'
// dependency for MongoDB Java driver
implementation 'org.mongodb:mongodb-driver-sync:4.9.0'
}
```
With this, our project is ready to connect and interact with our cloud database.
## Building an Azure function with Atlas
With all the prerequisites done, let's build our first real-world function using the MongoDB sample dataset for movies. In this project, we'll build two functions: One returns the count of the
total movies in the collection, and the other returns the movie document based on the year of release.
Let's generate the boilerplate code for the function by right-clicking on the package name and then selecting New > Azure function class. We'll call this function class `Movies`.
```java
public class Movies {
/**
* This function listens at endpoint "/api/Movie". Two ways to invoke it using "curl" command in bash:
* 1. curl -d "HTTP Body" {your host}/api/Movie
* 2. curl {your host}/api/Movie?name=HTTP%20Query
*/
@FunctionName("Movies")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage> request,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
// Parse query parameter
String query = request.getQueryParameters().get("name");
String name = request.getBody().orElse(query);
if (name == null) {
return request.createResponseBuilder(HttpStatus.BAD_REQUEST).body("Please pass a name on the query string or in the request body").build();
} else {
return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
}
}
}
```
Now, let's:
1. Update `@FunctionName` parameter from `Movies` to `getMoviesCount`.
2. Rename the function name from `run` to `getMoviesCount`.
3. Remove the `query` and `name` variables, as we don't have any query parameters.
Our updated code looks like this.
```java
public class Movies {
@FunctionName("getMoviesCount")
public HttpResponseMessage getMoviesCount(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage> request,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
return request.createResponseBuilder(HttpStatus.OK).body("Hello").build();
}
}
```
To connect with MongoDB Atlas using the Java driver, we first need a connection string, which can be found when we click “Connect” on our cluster in our Atlas account. For details, you can also refer to the documentation.
Using the connection string, we can create a `MongoClient` instance (via the `MongoClients` factory) that can be used to open a connection to the database.
```java
public class Movies {
private static final String MONGODB_CONNECTION_URI = "mongodb+srv://xxxxx@cluster0.xxxx.mongodb.net/?retryWrites=true&w=majority";
private static final String DATABASE_NAME = "sample_mflix";
private static final String COLLECTION_NAME = "movies";
private static MongoDatabase database = null;
private static MongoDatabase createDatabaseConnection() {
if (database == null) {
try {
MongoClient client = MongoClients.create(MONGODB_CONNECTION_URI);
database = client.getDatabase(DATABASE_NAME);
} catch (Exception e) {
throw new IllegalStateException("Error in creating MongoDB client");
}
}
return database;
}
/*@FunctionName("getMoviesCount")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage> request,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
return request.createResponseBuilder(HttpStatus.OK).body("Hello").build();
}*/
}
```
We can query our database for the total number of movies in the collection, as shown below.
```java
long totalRecords=database.getCollection(COLLECTION_NAME).countDocuments();
```
Updated code for `getMoviesCount` function looks like this.
```java
@FunctionName("getMoviesCount")
public HttpResponseMessage getMoviesCount(
@HttpTrigger(name = "req",
methods = {HttpMethod.GET},
authLevel = AuthorizationLevel.ANONYMOUS
) HttpRequestMessage<Optional<String>> request,
final ExecutionContext context) {
if (database != null) {
long totalRecords = database.getCollection(COLLECTION_NAME).countDocuments();
return request.createResponseBuilder(HttpStatus.OK).body("Total Records, " + totalRecords + " - At:" + System.currentTimeMillis()).build();
} else {
return request.createResponseBuilder(HttpStatus.INTERNAL_SERVER_ERROR).build();
}
}
```
Now let's deploy this code locally and on the cloud to validate the output. We'll use Postman.
Copy the URL from the console output and paste it on Postman to validate the output.
Let's deploy this on the Azure cloud on a `Linux` machine. Click on `Azure Explore` and select Functions App to create a virtual machine (VM).
Now right-click on the Azure function and select Create.
Change the platform to `Linux` with `Java 1.8`.
> If for some reason you don't want to change the platform and would like to use Windows OS, then add a standard DNS route before making a network request.
> ```java
> System.setProperty("java.naming.provider.url", "dns://8.8.8.8");
> ```
After a few minutes, you'll notice the VM we just created under `Function App`. Now, we can deploy our app onto it.
Press Run to deploy it.
Once deployment is successful, you'll find the `URL` of the serverless function.
Again, we'll copy this `URL` and validate using Postman.
With this, we have successfully connected our first function with
MongoDB Atlas. Now, let's take it to the next level. We'll create another function that returns a movie document based on the year of release.
Let's add the boilerplate code again.
```java
@FunctionName("getMoviesByYear")
public HttpResponseMessage getMoviesByYear(
@HttpTrigger(name = "req",
methods = {HttpMethod.GET},
authLevel = AuthorizationLevel.ANONYMOUS
) HttpRequestMessage<Optional<String>> request,
final ExecutionContext context) {
}
```
To capture the year supplied by the user, which will be used to query the collection, add this code:
```java
final int yearRequestParam = valueOf(request.getQueryParameters().get("year"));
```
To use this information for querying, we create a filter with the `Filters` builder that can be passed as input to the `find` method.
```java
Bson filter = Filters.eq("year", yearRequestParam);
Document result = collection.find(filter).first();
```
The updated code is:
```java
@FunctionName("getMoviesByYear")
public HttpResponseMessage getMoviesByYear(
@HttpTrigger(name = "req",
methods = {HttpMethod.GET},
authLevel = AuthorizationLevel.ANONYMOUS
) HttpRequestMessage<Optional<String>> request,
final ExecutionContext context) {
final int yearRequestParam = valueOf(request.getQueryParameters().get("year"));
if (database != null) {
// Only access the collection once we know the database connection exists.
MongoCollection<Document> collection = database.getCollection(COLLECTION_NAME);
Bson filter = Filters.eq("year", yearRequestParam);
Document result = collection.find(filter).first();
return request.createResponseBuilder(HttpStatus.OK).body(result.toJson()).build();
} else {
return request.createResponseBuilder(HttpStatus.BAD_REQUEST).body("Year missing").build();
}
}
```
Now let's validate this against Postman.
The last step in making our app production-ready is to secure the connection `URI`, as it contains credentials and should be kept private. One way of securing it is storing it into an environment variable.
Adding an environment variable in the Azure function can be done via the Azure portal and Azure IntelliJ plugin, as well. For now, we'll use the Azure IntelliJ plugin, so go ahead and open Azure Explore in IntelliJ.
Then, we select `Function App` and right-click `Show Properties`.
This will open a tab with all existing properties. We add our property into it.
Now we can update our function code to use this variable. From
```java
private static final String MONGODB_CONNECTION_URI = "mongodb+srv://xxxxx:xxxx@cluster0.xxxxx.mongodb.net/?retryWrites=true&w=majority";
```
to
```java
private static final String MONGODB_CONNECTION_URI = System.getenv("MongoDB_Connection_URL");
```
After redeploying the code, we are all set to use this app in production.
## Summary
Thank you for reading — hopefully you find this article informative! The complete source code of the app can be found on GitHub.
If you're looking for something similar using the Node.js runtime, check out the other tutorial on the subject.
With MongoDB Atlas on Microsoft Azure, developers receive access to the most comprehensive, secure, scalable, and cloud–based developer data platform on the market. Now, with the availability of Atlas on the Azure Marketplace, it’s never been easier for users to start building with Atlas while streamlining procurement and billing processes. Get started today through the Atlas on Azure Marketplace listing.
If you have any queries or comments, you can share them on the MongoDB forum or tweet me @codeWithMohit. | md | {
"tags": [
"Atlas",
"Java",
"Azure"
],
"pageDescription": "In this article, you'll learn how to use MongoDB Atlas, a cloud database, when you're getting started with Azure functions in Java.",
"contentType": "Tutorial"
} | How to Use Azure Functions with MongoDB Atlas in Java | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/langchain-vector-search | created | # Introduction to LangChain and MongoDB Atlas Vector Search
In this tutorial, we will leverage the power of LangChain, MongoDB, and OpenAI to ingest and process data created after ChatGPT-3.5. Follow along to create your own chatbot that can read lengthy documents and provide insightful answers to complex queries!
### What is LangChain?
LangChain is a versatile Python library that enables developers to build applications that are powered by large language models (LLMs). LangChain actually helps facilitate the integration of various LLMs (ChatGPT-3, Hugging Face, etc.) in other applications and understand and utilize recent information. As mentioned in the name, LangChain chains together different components, which are called links, to create a workflow. Each individual link performs a different task in the process, such as accessing a data source, calling a language model, processing output, etc. Since the order of these links can be moved around to create different workflows, LangChain is super flexible and can be used to build a large variety of applications.
### LangChain and MongoDB
MongoDB integrates nicely with LangChain because of the semantic search capabilities provided by MongoDB Atlas’s vector search engine. This allows for the perfect combination where users can query based on meaning rather than by specific words! Apart from MongoDB LangChain Python integration and MongoDB LangChain Javascript integration, MongoDB recently partnered with LangChain on the LangChain templates release to make it easier for developers to build AI-powered apps.
## Prerequisites for success
- MongoDB Atlas account
- OpenAI API account and your API key
- IDE of your choice (this tutorial uses Google Colab)
## Diving into the tutorial
Our first step is to ensure we’re downloading all the crucial packages we need to be successful in this tutorial. In Google Colab, please run the following command:
```
!pip install langchain pypdf pymongo openai python-dotenv tiktoken
```
Here, we’re installing six different packages in one. The first package is `langchain` (the package for the framework we are using to integrate language model capabilities), `pypdf` (a library for working with PDF documents in Python), `pymongo` (the official MongoDB driver for Python so we can interact with our database from our application), `openai` (so we can use OpenAI’s language models), `python-dotenv` (a library used to read key-value pairs from a .env file), and `tiktoken` (a package for token handling).
### Environment configuration
Once this command has been run and our packages have been successfully downloaded, let’s configure our environment. Prior to doing this step, please ensure you have saved your OpenAI API key and your connection string from your MongoDB Atlas cluster in a `.env` file at the root of your project. Help on finding your MongoDB Atlas connection string can be found in the docs.
```
import os
from dotenv import load_dotenv
from pymongo import MongoClient
load_dotenv(override=True)
# Add an environment file to the notebook root directory called .env with MONGO_URI="xxx" to load these environment variables
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
MONGO_URI = os.environ["MONGO_URI"]
DB_NAME = "langchain-test-2"
COLLECTION_NAME = "test"
ATLAS_VECTOR_SEARCH_INDEX_NAME = "default"
EMBEDDING_FIELD_NAME = "embedding"
client = MongoClient(MONGO_URI)
db = client[DB_NAME]
collection = db[COLLECTION_NAME]
```
Please feel free to name your database, collection, and even your vector search index anything you like. Just continue to use the same names throughout the tutorial. Once this code block runs successfully, your client, database, and collection handles are ready to use (MongoDB creates the database and collection when we insert our first documents).
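Before moving on, you can optionally confirm that the connection string works with a quick ping. This is a standard PyMongo call and prints an acknowledgement if the cluster is reachable:
```python
# Optional sanity check: raises an exception if the cluster is unreachable.
print(client.admin.command("ping"))
```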
### Loading in our data
We are going to load in the `GPT-4 Technical Report` PDF. As mentioned above, this report came out after OpenAI’s ChatGPT information cutoff date, so the language model isn’t trained to answer questions about the information included in this 100-page document.
The LangChain package will help us answer any questions we have about this PDF. Let’s load in our data:
```
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch
loader = PyPDFLoader("https://arxiv.org/pdf/2303.08774.pdf")
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size = 500, chunk_overlap = 50)
docs = text_splitter.split_documents(data)
# insert the documents in MongoDB Atlas Vector Search
x = MongoDBAtlasVectorSearch.from_documents(
documents=docs, embedding=OpenAIEmbeddings(disallowed_special=()), collection=collection, index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME
)
```
In this code block, we are loading in our PDF, using a command to split up the data into various chunks, and then we are inserting the documents into our collection so we can use our search index on the inserted data.
To test and make sure our data is properly loaded in, run a test:
```
docs[0]
```
Your output should look like this:
*Output from our `docs[0]` command, showing the first chunk of the PDF and confirming the data is loaded correctly.*
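You can also verify that the chunks actually landed in Atlas by counting the documents through the `collection` handle we created earlier; the exact number depends on the chunking settings above:
```python
# Count the chunk documents inserted by MongoDBAtlasVectorSearch.from_documents.
print(collection.count_documents({}))
```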
### Creating our search index
Let’s head over to our MongoDB Atlas user interface to create our Vector Search Index.
First, click on the “Search” tab and then on “Create Search Index.” You’ll be taken to this page. Please click on “JSON Editor.”
Please make sure the correct database and collection are selected, and make sure you have chosen the index name defined above. Then, paste in the search index definition we are using for this tutorial:
```
{
"fields":
{
"type": "vector",
"path": "embedding",
"numDimensions": 1536,
"similarity": "cosine"
},
{
"type": "filter",
"path": "source"
}
]
}
```
These fields specify the field names in our documents. With `embedding`, we are specifying that the model used to embed our data produces vectors with `1536` dimensions, and that the similarity function used to find the nearest k neighbors is `cosine`. It’s crucial that the dimensions in our search index match those of the language model we are using to embed our data.
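If you want to double-check that number for yourself, you can embed a short string and look at the vector's length. Assuming the default `text-embedding-ada-002` model, it should come out to 1536:
```python
from langchain.embeddings import OpenAIEmbeddings

# Assumes OPENAI_API_KEY is set (we loaded it from .env earlier).
sample_vector = OpenAIEmbeddings().embed_query("test sentence")
print(len(sample_vector))  # expected: 1536 for text-embedding-ada-002
```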
Check out our [Vector Search documentation for more information on the index configuration settings.
Once set up, it’ll look like this:
Create the search index and let it load.
## Querying our data
Now, we’re ready to query our data! We are going to show various ways of querying our data in this tutorial. We are going to utilize filters along with Vector Search to see our results. Let’s get started. Please ensure you are connected to your cluster prior to attempting to query or it will not work.
### Semantic search in LangChain
To get started, let’s first see an example using LangChain to perform a semantic search:
```
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch
vector_search = MongoDBAtlasVectorSearch.from_connection_string(
MONGO_URI,
DB_NAME + "." + COLLECTION_NAME,
OpenAIEmbeddings(disallowed_special=()),
index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME
)
query = "gpt-4"
results = vector_search.similarity_search(
query=query,
k=20,
)
for result in results:
print(result)
```
This gives the output:
This gives us the relevant results that semantically match the intent behind the question. Now, let’s see what happens when we ask a question using LangChain.
### Question and answering in LangChain
Run this code block to see what happens when we ask questions to see our results:
```
qa_retriever = vector_search.as_retriever(
search_type="similarity",
search_kwargs={
"k": 200,
"post_filter_pipeline": {"$limit": 25}]
}
)
from langchain.prompts import PromptTemplate
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
qa = RetrievalQA.from_chain_type(llm=OpenAI(),chain_type="stuff", retriever=qa_retriever, return_source_documents=True, chain_type_kwargs={"prompt": PROMPT})
docs = qa({"query": "gpt-4 compute requirements"})
print(docs["result"])
print(docs['source_documents'])
```
After this is run, we get the result:
```
GPT-4 requires a large amount of compute for training, it took 45 petaflops-days of compute to train the model. [Document(page_content='gpt3.5Figure 4. GPT performance on academic and professional exams. In each case, we simulate
```
This provides a succinct answer to our question, based on the data source provided.
## Conclusion
Congratulations! You have successfully loaded in external data and queried it using LangChain and MongoDB. For more information on MongoDB Vector Search, please visit our documentation.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "This comprehensive tutorial takes you through how to integrate LangChain with MongoDB Atlas Vector Search.",
"contentType": "Tutorial"
} | Introduction to LangChain and MongoDB Atlas Vector Search | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/gaming-startups-2023 | created | # MongoDB Atlas for Gaming, Startups to Watch in 2023
In the early days, and up until a decade ago, games were mostly about graphics prowess and fun game play that keep players coming back, wanting for more. And that's still the case today, but modern games have proven that data is also a crucial part of video games.
As developers leverage a data platform like MongoDB Atlas for gaming, they can do more, faster, and make the game better by focusing engineering resources on the player's experience, which can be tailored thanks to insights leveraged during the game sessions. The experience can continue outside the game too, especially with the rise in popularity of eSports and their legions of fans who gather around a fandom.
## Yile Technology
*Mezi Wu, Research and Development Manager at Yile Technology (left) and Yi-Zheng Lin, Senior Database Administrator of Yile Technology*
Yile Technology Co. Ltd. is a mobile game development company founded in 2018 in Taiwan. Since then, it has developed social games that have quickly acquired a large audience. For example, its Online808 social casino game has rapidly crossed the 1M members mark as Yile focuses intensely on user experience improvement and game optimization.
Yile developers leverage the MongoDB Atlas platform for two primary reasons. First, it's about performance. Yile developers realized early in their success that even cloud relational databases (RDBMS) were challenging to scale horizontally. Early tests showed RDBMS could not achieve Yile's desired goal of having a 0.5s minimum game response time.
"Our team sought alternatives to find a database with much stronger horizontal scalability. After assessing the pros and cons of a variety of solutions on the market, we decided to build with MongoDB's document database," Mezi Wu, Research and Development Manager at Yile Technology, said.
The R&D team thought MongoDB was easy to use and supported by vast online resources, including discussion forums. It only took one month to move critical data back-end components, like player profiles, from RDBMS to MongoDB and eliminate game database performance issues.
The second is about operations. Wu said, "MongoDB Atlas frees us from the burden of basic operational maintenance and maximizes the use of our most valuable resources: our people."
That's why after using the self-managed MongoDB community version at first, Yile Technology moved to the cloud-managed version of MongoDB, MongoDB Atlas, to alleviate the maintenance and monitoring burden experienced by the R&D team after a game's launch. It's natural to overwatch the infrastructure after a new launch, but the finite engineering resources are best applied to optimizing the game and adding new features.
"Firstly, with support from the MongoDB team, we have gained a better understanding of MongoDB features and advantages and become more precise in our usage. Secondly, MongoDB Atlas provides an easy-to-use operation interface, which is faster and more convenient for database setup and can provide a high-availability architecture with zero downtime," says Yi-Zheng Lin, Senior Database Administrator.
Having acquired experience and confidence, now validated by rapid success, Yile Technology plans to expand its usage of MongoDB further. The company is interested in the MongoDB transaction features for its cash flow data and the MongoDB aggregation pipeline to analyze users' behavior.
## Beamable
Based in Boston, USA, Beamable is a company that streamlines game development and deployment for game developers. Beamable does that by providing a game server architecture that handles the very common needs of backend game developers, which offloads a sizable chunk of the development process, leaving more time to fine-tune game mechanics and stickiness.
Game data (also called game state) is a very important component in game development, but the operations and tools required to maximize its utilization and efficiency are almost as critical. Building such tools and processes can be daunting, especially for smaller up-and-coming game studios, no matter how talented.
For example, Beamable lets developers integrate, manage, and analyze their data with a web dashboard called LiveOps Portal so engineers don't have to build an expensive custom live games solution. That's only one of the many game backend aspects Beamable handles, so check the whole list on their features page.
Beamable's focus on integrating itself into the development workflow is one of the most crucial advantages of their offering, because every game developer wants to tweak things right in the game's editor --- for example, in Unity, for which Beamable's integration is impressive and complete.
To achieve such a feat, Beamable built the platform on top of MongoDB Atlas "from day one" according to Ali El Rhermoul (listen to the podcast ep. 151), and therefore started on a solid
and scalable developer data platform to innovate upon, leaving the database operations to MongoDB, while focusing on adding value to their customers. Beamable helps many developers, which translates into an enormous aggregated amount of data.
Additionally, MongoDB's document model works really well for games and that has been echoed many times in the games industry. Games have some of the most rapidly changing schemas, and some games offer new features, items, and rewards on a daily basis, if not hourly.
With Beamable, developers can easily add game features such as leaderboards, commerce offers, or even identity management systems that are GDPR-compatible. Beamable is so confident in its platform that developers can try for free with a solid feature set, and seamlessly upgrade to get active support or enterprise features.
## Bemyfriends
bemyfriends is a South Korean company that built a SaaS solution called b.stage, which lets creators, brands, talents, and IP holders connect with their fans in meaningful, agreeable, and effective ways, including monetization. bemyfriends is different from any other competitor because the creators are in control and own entirely all data created or acquired, even if they decide to leave.
With b.stage, creators have a dedicated place where they can communicate, monetize, and grow their businesses at their own pace, free from feed algorithms. There, they can nurture their fans into super fans. b.stage supports multiple languages (system and content) out of the box. However, membership, e-commerce, live-streaming, content archives, and even community features (including token-gated ones) are also built-in and integrated to single admin.
Built-in analytics tools and dashboards are available for in-depth analysis without requiring external tool integration. Creators can focus on their content and fans without worrying about complex technical implementations. That makes b.stage a powerful and straightforward fandom solution with high-profile creators, such as eSports teams T1, KT Rolster and Nongshim Redforce, three teams with millions of gamer fans in South Korea and across the world.
bemyfriends uses MongoDB as its primary data platform. June Kay Kim (CTO, bemyfriends) explained that engineers initially tested with an RDBMS solution but quickly realized that scaling a relational database at the required scale would be difficult. MongoDB's scalability and performance were crucial criteria in the data platform selection.
Additionally, MongoDB's flexible schema was an essential feature for the bemyfriends team. Their highly innovative product demands many different data schemas, and each can be subject to frequent modifications to integrate the latest features creators need.
While managing massive fandoms, downtime is not an option, so the ability to make schema modifications without incurring downtime was also a requirement for the data platform. For all these reasons, bemyfriends use MongoDB Atlas to power the vast majority of the data in their SaaS solution.
Building with the corporate slogan of "Whatever you make, we will help you make more of it!," bemyfriend has created a fantastic tool for fandom business, whether their fans are into music, movies, games, or a myriad of other things --- the sky's the limit. Creators can focus on their fandom, knowing the most crucial piece of their fandom business, the data, is truly theirs.
## Diagon
Diagon is a gaming company based in Lagos, Nigeria. They are building a hyper-casual social gaming platform called "CASUAL by Diagon" where users can access several games. There are about 10 games at the moment, and Diagon is currently working on developing and publishing more games on its platform, by working with new game developers currently joining the in-house team. The building of an internal game development team will be coming with the help of a fresh round of funding for the start-up (Diagon Pre-Seed Round).
The games are designed to be very easy to play so that more people can play games while having a break, waiting in line, or during other opportune times. Not only do players have the satisfaction of progressing and winning the games, but there's also a social component.
Diagon has a system of leaderboards to help the best players gain visibility within the community. At the same time, raffles make people more eager to participate, regardless of their gaming skills.
Diagon utilized MongoDB from the start, and one key defining factor was MongoDB's flexible schema. It means that the same collection ("table," in RDBMS lingo) can contain documents using multiple schemas, or schema versions, as long as the code can handle them. This flexibility allows game developers to quickly add properties or new data types without incurring downtime, thus accelerating the pace of innovation.
Diagon also runs on MongoDB Atlas, the MongoDB platform, which handles the DevOps aspect of the database, leaving developers to focus on making their games better. "Having data as objects is the future," says Jeremiah Onojah, Founder and Product Developer at Diagon. And Diagon's engineers are just getting started: "I believe there's so much more to get out of MongoDB," he adds, noting that future apps are planned to run on MongoDB.
For example, an area of interest for Onojah is MongoDB Atlas Search, a powerful integrated Search feature, powered by Lucene. Atlas developers can tap into this very advanced search engine without having to integrate a third-party system, thanks to the unified MongoDB Query Language (MQL).
Diagon is growing fast and has a high retention rate of 20%. Currently, 80% of its user base comes from Nigeria, but the company already sees users coming from other locations, which demonstrates that growth could be worldwide. Diagon is one of the startups from the MongoDB Startup Program.
## Conclusion
MongoDB Atlas is an ideal developer data platform for game developers, whether you are a solo developer or working on AAA titles. Developers agree that MongoDB's data model helps them change their data layer quicker to match the desired outcome.
All the while, MongoDB Atlas enables their applications to reach global scale and high availability (99.995% SLA) without involving complex operations. Finally, the unique Atlas data services --- like full-text search, data lake, analytics workload, mobile sync, and Charts --- make it easy to extract insights from past and real-time data.
Create a free MongoDB Atlas cluster and start prototyping your next game back end. Listen to the gaming MongoDB podcast playlist to learn more about how other developers use MongoDB. If you are going to GDC 2023, come to our booth, talks, user group meetup, and events. They are all listed at mongodb.com/gdc.
Last and not least, if your startup uses MongoDB, our MongoDB startup program can help you reach the next level faster, with Atlas credits and access to MongoDB experts. | md | {
"tags": [
"Atlas"
],
"pageDescription": "This article highlights startups in the games industry that use MongoDB as a backend. Their teams describe why they chose MongoDB Atlas and how it makes their development more productive.",
"contentType": "Article"
} | MongoDB Atlas for Gaming, Startups to Watch in 2023 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/introducing-realm-flipper-plugin | created | # Technical Preview of a Realm Flipper Plugin
React Native is a framework built by many components, and often, there are multiple ways to do the same thing. Debugging is an example of that. React Native exposes the Chrome DevTools Protocol, and you are able to debug mobile apps using the Chrome browser. Moreover, if you are using MacOS, you can debug your app running on iOS using Safari.
Flipper is a new tool for mobile developers, and in particular, in the React Native community, it’s growing in popularity.
In the past, debugging a React Native app with Realm JavaScript has virtually been impossible. Switching to the new React Native architecture, it has been possible to switch from the Chrome debugger to Flipper by using the new JavaScript engine, Hermes.
Debugging is more than setting breakpoints and single stepping through code. Inspecting your database is just as important.
Flipper itself can be downloaded, or you can use Homebrew if you are a Mac user. The plugin is available for installation in the Flipper plugin manager and on npm for the mobile side.
Read more about getting started with the Realm Flipper plugin.
In the last two years, Realm has been investing in providing a better experience for React Native developers. Over the course of 10 weeks, a team of three interns investigated how Realm can increase developer productivity and enhance the developer experience by developing a Realm plugin for Flipper to inspect a Realm database.
The goal with the Realm Flipper Plugin is to offer a simple-to-use and powerful debugging tool for Realm databases. It enables you to explore and modify Realm directly from the user interface.
## Installation
The Flipper support consists of two components. First, you need to install the `flipper-realm-plugin` in the Flipper desktop application. You can find it in Flipper’s plugin manager — simply search for it by name.
Second, you have to add Flipper support to your React Native app. Add `realm-flipper-plugin-device` to your app’s dependencies, and add the component `` to your app’s source code (realms is an array of Realm instances).
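As a rough sketch of what that wiring can look like (the component name and default export below are assumptions based on the plugin's documentation at the time of writing; check the plugin README for the current API):

```typescript
// App.tsx -- minimal sketch; RealmPlugin is assumed to be the device plugin's exported component.
import React from 'react';
import Realm from 'realm';
import RealmPlugin from 'realm-flipper-plugin-device';

// Open the Realm(s) you want to inspect from Flipper.
const realm = new Realm({
  schema: [{name: 'Task', properties: {_id: 'int', name: 'string'}, primaryKey: '_id'}],
});

export default function App() {
  return (
    <>
      {/* Pass every Realm instance you want to debug to the plugin. */}
      <RealmPlugin realms={[realm]} />
      {/* ...the rest of your app... */}
    </>
  );
}
```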
Once you launch your app — on device or simulator — you can access your database from the Flipper desktop application.
## Features
Live objects are a key concept of Realm. Query results and individual objects will automatically be updated when the underlying database is changed. The Realm Flipper plugin supports live objects. This means whenever objects in a Realm change, it’s reflected in the plugin. This makes it easy to see what is happening inside an application. Data can either be filtered using Realm Query Language or explored directly in the table. Additionally, the plugin enables you to traverse linked objects inside the table or in a JSON view.
The schema tab shows an overview of the currently selected schema and its properties.
Schemas are not only presented in a table but also as a directed graph, which makes it even easier to see dependencies.
See a demonstration of our plugin.
## Looking ahead
Currently, our work on Hermes is only covered by pre-releases. In the near future, we will release version 11.0.0. The Realm Flipper plugin will hopefully prove to be a useful tool when you are debugging your React Native app once you switch to Hermes.
From the start, the plugin was split into two components. One component runs on the device, and the other runs on the desktop (inside the Flipper desktop application). This will make it possible to add a database inspector within an IDE such as VSCode.
## Relevant links
* Desktop plugin on npm
* Device plugin on npm
* GitHub repository
* Flipper download
* Flipper documentation
* Realm Node.js documentation
## Disclaimer
The Realm Flipper plugin is still in the early stage of development. We are putting it out to our community to get a better understanding of what their needs are.
The plugin is likely to change over time, and for now, we cannot commit to any promises regarding bug fixes or new features. As always, you are welcome to create pull requests and issues.
And don’t forget — if you have questions, comments, or feedback, we’d love to hear from you in the MongoDB Community Forums. | md | {
"tags": [
"JavaScript",
"Realm",
"React Native"
],
"pageDescription": "Click here for a brief introduction to the Realm Flipper plugin for React Native developers.",
"contentType": "Article"
} | Technical Preview of a Realm Flipper Plugin | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-data-federation-setup | created | # MongoDB Data Federation Setup
As an avid traveler, you have a love for staying at Airbnbs and have been keeping detailed notes about each one you’ve stayed in over the years. These notes are spread out across different storage locations, like MongoDB Atlas and AWS S3, making it a challenge to search for a specific Airbnb with the amenities your girlfriend desires for your upcoming Valentine’s Day trip. Luckily, there is a solution to make this process a lot easier. By using MongoDB’s Data Federation feature, you can combine all your data into one logical view and easily search for the perfect Airbnb without having to worry about where the data is stored. This way, you can make your Valentine’s Day trip perfect without wasting time searching through different databases and storage locations.
Don’t know how to utilize MongoDB’s Data Federation feature? This tutorial will guide you through exactly how to combine your Airbnb data together for easier query-ability.
## Tutorial Necessities
Before we jump in, there are a few necessities we need to have in order to be on the same page. This tutorial requires:
* MongoDB Atlas.
* An Amazon Web Services (AWS) account.
* Access to the AWS Management Console.
* AWS CLI.
* MongoDB Compass.
### Importing our sample data
Our first step is to import our Airbnb data into our Atlas cluster and our S3 bucket, so we have data to work with throughout this tutorial. Make sure to import the dataset into both of these storage locations.
### Importing via MongoDB Atlas
Step 1: Create a free tier shared cluster.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
Step 2: Once your cluster is set up, click the three ellipses and click “Load Sample Dataset."
Step 3: Once you get this green message you’ll know your sample dataset (Airbnb notes) is properly loaded into your cluster.
### Importing via AWS S3
Step 1: We will be using this sample data set. Please download it locally. It contains the sample data we are working with along with the S3 bucket structure necessary for this demo.
Step 2: Once the data set is downloaded, access your AWS Management Console and navigate to their S3 service.
Step 3: Hit the button “Create Bucket” and follow the instructions to create your bucket and upload the sampledata.zip.
Step 4: Make sure to unzip your file before uploading the folders into S3.
Step 5: Once your data is loaded into the bucket, you will see several folders, each with varying data types.
Step 6: Follow the path: Amazon S3 > Buckets > atlas-data-federation-demo > json/ > airbnb/ to view your Airbnb notes. Your bucket structure should look like this:
Congratulations! You have successfully uploaded your extensive Airbnb notes in not one but two storage locations. Now, let’s see how to retrieve this information in one location using Data Federation so we can find the perfect Airbnb. In order to do so, we need to get comfortable with the MongoDB Atlas Data Federation console.
## Connecting MongoDB Atlas to S3
Inside the MongoDB Atlas console, on the left side, click on Data Federation.
Here, click “set up manually” in the "create new federated database" dropdown in the top right corner of the UI. This will lead us to a page where we can add in our data sources. You can rename your Federated Database Instance to anything you like. Once you save it, you will not be able to change the name.
Let’s add in our data sources from our cluster and our bucket!
### Adding in data source via AWS S3 Bucket:
Step 1: Click on “Add Data Source.”
Step 2: Select the “Amazon S3” button and hit “Next.”
Step 3: From here, click Next on the “Authorize an AWS IAM Role”:
Step 4: Click on “Create New Role in the AWS CLI”:
Step 5: Now, you’re going to want to make sure you have AWS CLI configured on your laptop.
Step 6: Follow the steps below the “Create New Role with the AWS CLI” in your AWS CLI.
```
aws iam create-role \
--role-name datafederation \
--assume-role-policy-document file://role-trust-policy.json
```
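The `role-trust-policy.json` file referenced in that command is generated for you by the Atlas UI in the previous step. As a rough illustration of its typical shape (the account ARN and external ID shown here are placeholders; always use the values Atlas generates for you):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<atlas-aws-account-id>:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<atlas-assumed-role-external-id>"
        }
      }
    }
  ]
}
```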
Step 7: You can find your “ARN” directly in your terminal. Copy that in — it should look like this:
```
arn:aws:iam::7***************:role/datafederation
```
Step 8: Enter the bucket name containing your Airbnb notes:
Step 9: Follow the instructions in Atlas and save your policy role.
Step 10: Copy the CLI commands listed on the screen and paste them into your terminal like so:
```
aws iam put-role-policy \
--role-name datafederation \
--policy-name datafederation-policy \
--policy-document file://adl-s3-policy.json
```
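Similarly, the `adl-s3-policy.json` file is generated by Atlas and scopes the role to your bucket. A representative sketch (your bucket name and the exact action list may differ from what Atlas generates):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::atlas-data-federation-demo",
        "arn:aws:s3:::atlas-data-federation-demo/*"
      ]
    }
  ]
}
```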
Step 11: Access your AWS Console, locate your listingsAndReviews.json file located in your S3 bucket, and copy the S3 URI.
Step 12: Enter it back into your “Define ‘Data Sources’ Using Paths Inside Your S3” screen and change each step of the tree to “static.”
Step 13: Drag your file from the left side of the screen to the middle where it says, “Drag the dataset to your Federated Database.” Following these steps correctly will result in a page similar to the screenshot below.
You have successfully added in your Airbnb notes from your S3 bucket. Nice job. Let's do the same thing for the notes saved in our Atlas cluster.
### Adding in data source via MongoDB Atlas cluster
Step 1: Click “Add Data Sources.”
Step 2: Select “MongoDB Atlas Cluster” and provide the cluster name along with our sample_airbnb collection. These are your Atlas Airbnb notes.
Step 3: Click “Next” and your sample_airbnb.listingsAndReviews will appear in the left-hand side of the console.
Step 4: Drag it directly under your Airbnb notes from your S3 bucket and hit “Save.” Your console should look like this when done:
Great job. You have successfully imported your Airbnb notes from both your S3 bucket and your Atlas cluster into one location. Let’s connect to our Federated Database and see our data combined in one easily query-able location.
## Connect to your federated database
We are going to connect to our Federated Database using MongoDB Compass.
Step 1: Click the green “Connect” button and then select “Connect using MongoDB Compass.”
Step 2: Copy in the connection string, making sure to switch out the user and password for your own. This user must have admin access in order to access the data.
Step 3: Once you’re connected to Compass, click on “VirtualDatabase0” and once more on “VirtualCollection0.”
Amazing job. You can now look at all your Airbnb notes in one location!
## Conclusion
In this tutorial, we have successfully stored your Airbnb data in various storage locations, combined these separate data sets into one via Data Federation, and successfully accessed our data back through MongoDB Compass. Now you can look for and book the perfect Airbnb for your trip in a fraction of the time. | md | {
"tags": [
"Atlas",
"AWS"
],
"pageDescription": "This tutorial will guide you through exactly how to combine your Airbnb data together for easier query-ability. ",
"contentType": "Tutorial"
} | MongoDB Data Federation Setup | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-search-java | created | # Using Atlas Search from Java
Dear fellow developer, welcome!
Atlas Search is a full-text search engine embedded in MongoDB Atlas that gives you a seamless, scalable experience for building relevance-based app features. Built on Apache Lucene, Atlas Search eliminates the need to run a separate search system alongside your database. The gateway to Atlas Search is the `$search` aggregation pipeline stage.
The $search stage, as one of the newest members of the MongoDB aggregation pipeline family, has gotten native, convenient support added to various language drivers. Driver support helps developers build concise and readable code. This article delves into using the Atlas Search support built into the MongoDB Java driver, where we’ll see how to use the driver, how to handle `$search` features that don’t yet have native driver convenience methods or have been released after the driver was released, and a glimpse into Atlas Search relevancy scoring. Let’s get started!
## New to search?
Full-text search is a deceptively sophisticated set of concepts and technologies. From the user perspective, it’s simple: good ol’ `?q=query` on your web applications URL and relevant documents are returned, magically. There’s a lot behind the classic magnifying glass search box, from analyzers, synonyms, fuzzy operators, and facets to autocomplete, relevancy tuning, and beyond. We know it’s a lot to digest. Atlas Search works hard to make things easier and easier for developers, so rest assured you’re in the most comfortable place to begin your journey into the joys and power of full-text search. We admittedly gloss over details here in this article, so that you get up and running with something immediately graspable and useful to you, fellow Java developers. By following along with the basic example provided here, you’ll have the framework to experiment and learn more about details elided.
## Setting up our Atlas environment
We need two things to get started, a database and data. We’ve got you covered with both. First, start with logging into your Atlas account. If you don’t already have an Atlas account, follow the steps for the Atlas UI in the “Get Started with Atlas” tutorial.
### Opening network access
If you already had an Atlas account or perhaps like me, you skimmed the tutorial too quickly and skipped the step to add your IP address to the list of trusted IP addresses, take care of that now. Atlas only allows access to the IP addresses and users that you have configured but is otherwise restricted.
### Indexing sample data
Now that you’re logged into your Atlas account, add the sample datasets to your environment. Specifically, we are using the sample_mflix collection here. Once you’ve added the sample data, turn Atlas Search on for that collection by navigating to the Search section in the Databases view, and clicking “Create Search Index.”
Once in the “Create Index” wizard, use the Visual Editor, pick the sample_mflix.movies collection, leave the index name as “default”, and finally, click “Create Search Index.”
It’ll take a few minutes for the search index to be built, after which an e-mail notification will be sent. The indexing processing status can be tracked in the UI, as well.
Here’s what the Search section should now look like for you:
Voila, now you’ve got the movie data indexed into Atlas Search and can perform sophisticated full text queries against it. Go ahead and give it a try using the handy Search Tester, by clicking the “Query” button. Try typing in some of your favorite movie titles or actor names, or even words that would appear in the plot or genre.
Behind the scenes of the Search Tester lurks the $search pipeline stage. Clicking “Edit $search Query” exposes the full $search stage in all its JSON glory, allowing you to experiment with the syntax and behavior.
This is our first glimpse into the $search syntax. The handy “copy” (the top right of the code editor side panel) button copies the code to your clipboard so you can paste it into your favorite MongoDB aggregation pipeline tools like Compass, MongoDB shell, or the Atlas UI aggregation tool (shown below). There’s an “aggregation pipeline” link there that will link you directly to the aggregation tool on the current collection.
At this point, your environment is set up and your collection is Atlas search-able. Now it’s time to do some coding!
## Click, click, click, … code!
Let’s first take a moment to reflect on and appreciate what’s happened behind the scenes of our wizard clicks up to this point:
* A managed, scalable, reliable MongoDB cluster has spun up.
* Many sample data collections were ingested, including the movies database used here.
* A triple-replicated, flexible, full-text index has been configured and built from existing content and stays in sync with database changes.
Through the Atlas UI and other tools like MongoDB Compass, we are now able to query our movies collection in, of course, all the usual MongoDB ways, and also through a proven and performant full-text index with relevancy-ranked results. It’s now up to us, fellow developers, to take it across the finish line and build the applications that allow and facilitate the most useful or interesting documents to percolate to the top. And in this case, we’re on a mission to build Java code to search our Atlas Search index.
## Our coding project challenge
Let’s answer this question from our movies data:
> What romantic, drama movies have featured Keanu Reeves?
Yes, we could answer this particular question knowing the precise case and spelling of each field value in a direct lookup fashion, using this aggregation pipeline:
```
[
  {
    $match: {
      cast: {
        $in: ["Keanu Reeves"],
      },
      genres: {
        $all: ["Drama", "Romance"],
      },
    },
  }
]
```
Let’s suppose we have a UI that allows the user to select one or more genres to filter, and a text box to type in a free form query (see the resources at the end for a site like this). If the user had typed “keanu reeves”, all lowercase, the above $match would not find any movies. Doing known, exact value matching is an important and necessary capability, to be sure, yet when presenting free form query interfaces to humans, we need to allow for typos, case insensitivity, voice transcription mistakes, and other inexact, fuzzy queries.
*(Screenshot: using $match with lowercase "keanu reeves" returns no matches.)*
Using the Atlas Search index we’ve already set up, we can now easily handle a variety of full text queries. We’ll stick with this example throughout so you can compare and contrast doing standard $match queries to doing sophisticated $search queries.
## Know the $search structure
Ultimately, regardless of the coding language, environment, or driver that we use, a BSON representation of our aggregation pipeline request is handled by the server. The Aggregation view in Atlas UI and very similarly in Compass, our useful MongoDB client-side UI for querying and analyzing MongoDB data, can help guide you through the syntax, with links directly to the pertinent Atlas Search aggregation pipeline documentation.
Rather than incrementally building up to our final example, here’s the complete aggregation pipeline so you have it available as we adapt this to Java code. This aggregation pipeline performs a search query, filtering results to movies that are categorized as both Drama and Romance genres, that have “keanu reeves” in the cast field, returning only a few fields of the highest ranked first 10 documents.
```json
[
  {
    "$search": {
      "compound": {
        "filter": [
          {
            "compound": {
              "must": [
                {
                  "text": {
                    "query": "Drama",
                    "path": "genres"
                  }
                },
                {
                  "text": {
                    "query": "Romance",
                    "path": "genres"
                  }
                }
              ]
            }
          }
        ],
        "must": [
          {
            "phrase": {
              "query": "keanu reeves",
              "path": {
                "value": "cast"
              }
            }
          }
        ]
      },
      "scoreDetails": true
    }
  },
  {
    "$project": {
      "_id": 0,
      "title": 1,
      "cast": 1,
      "genres": 1,
      "score": {
        "$meta": "searchScore"
      },
      "scoreDetails": {
        "$meta": "searchScoreDetails"
      }
    }
  },
  {
    "$limit": 10
  }
]
```
At this point, go ahead and copy the above JSON aggregation pipeline and paste it into Atlas UI or Compass. There’s a nifty feature (the "</> TEXT" mode toggle) where you can paste in the entire JSON just copied. Here’s what the results should look like for you:
*(Screenshot: the three-stage aggregation pipeline in Compass.)*
As we adapt the three-stage aggregation pipeline to Java, we’ll explain things in more detail.
We spend the time here emphasizing this JSON-like structure because it will help us in our Java coding. It’ll serve us well to also be able to work with this syntax in ad hoc tools like Compass in order to experiment with various combinations of options and stages to arrive at what serves our applications best, and be able to translate that aggregation pipeline to Java code. It’s also the most commonly documented query language/syntax for MongoDB and Atlas Search; it’s valuable to be savvy with it.
## Now back to your regularly scheduled Java
Version 4.7 of the MongoDB Java driver was released in July of last year (2022), adding convenience methods for the Atlas `$search` stage, while Atlas Search was made generally available two years prior. In that time, Java developers weren’t out of luck, as direct BSON Document API calls to construct a $search stage work fine. Code examples in that time frame used `new Document("$search",...)`. This article showcases a more comfortable way for us Java developers to use the `$search` stage, allowing clearly named and strongly typed parameters to guide you. Your IDE’s method and parameter autocompletion will be a time-saver to more readable and reliable code.
There’s a great tutorial on using the MongoDB Java driver in general.
The full code for this tutorial is available on GitHub.
You’ll need a modern version of Java, something like:
```bash
$ java --version
openjdk 17.0.7 2023-04-18
OpenJDK Runtime Environment Homebrew (build 17.0.7+0)
OpenJDK 64-Bit Server VM Homebrew (build 17.0.7+0, mixed mode, sharing)
```
Now grab the code from our repository using `git clone` and go to the working directory:
```bash
git clone https://github.com/mongodb-developer/getting-started-search-java
cd getting-started-search-java
```
Once you clone that code, copy the connection string from the Atlas UI (the “Connect” button on the Database page). You’ll use this connection string in a moment to run the code connecting to your cluster.
Now open a command-line prompt to the directory where you placed the code, and run:
```bash
ATLAS_URI="<>" ./gradlew run
```
Be sure to fill in the appropriate username and password in the connection string. If you don’t already have Gradle installed, the `gradlew` command should install it the first time it is executed. At this point, you should see a few pages of output flurry by in your console. If the process hangs for a few seconds and then times out with an error message, check your Atlas network permissions and the connection string you specified in the `ATLAS_URI` setting, including the username and password.
Using the `run` command from Gradle is a convenient way to run the Java `main()` of our `FirstSearchExample`. It can be run in other ways as well, such as through an IDE. Just be sure to set the `ATLAS_URI` environment variable for the environment running the code.
Ideally, at this point, the code ran successfully, performing the search query that we have been describing, printing out these results:
```
Sweet November
 Cast: [Keanu Reeves, Charlize Theron, Jason Isaacs, Greg Germann]
 Genres: [Drama, Romance]
 Score:6.011996746063232

Something's Gotta Give
 Cast: [Jack Nicholson, Diane Keaton, Keanu Reeves, Frances McDormand]
 Genres: [Comedy, Drama, Romance]
 Score:6.011996746063232

A Walk in the Clouds
 Cast: [Keanu Reeves, Aitana Sènchez-Gijèn, Anthony Quinn, Giancarlo Giannini]
 Genres: [Drama, Romance]
 Score:5.7239227294921875

The Lake House
 Cast: [Keanu Reeves, Sandra Bullock, Christopher Plummer, Ebon Moss-Bachrach]
 Genres: [Drama, Fantasy, Romance]
 Score:5.7239227294921875
```
So there are four movies that match our criteria — our initial mission has been accomplished.
## Java $search building
Let’s now go through our project and code, pointing out the important pieces you will be using in your own project. First, our `build.gradle` file specifies that our project depends on the MongoDB Java driver, down to the specific version of the driver. There’s also a convenient `application` plugin so that we can use the `run` target as we just did.
```groovy
plugins {
    id 'java'
    id 'application'
}

group 'com.mongodb.atlas'
version '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.mongodb:mongodb-driver-sync:4.10.1'
    implementation 'org.apache.logging.log4j:log4j-slf4j-impl:2.17.1'
}

application {
    mainClass = 'com.mongodb.atlas.FirstSearchExample'
}
```
See our docs for further details on how to add the MongoDB Java driver to your project.
In typical Gradle project structure, our Java code resides under `src/main/java/com/mongodb/atlas/` in FirstSearchExample.java.
Let’s walk through this code, section by section, in a little bit backward order. First, we open a connection to our collection, pulling the connection string from the `ATLAS_URI` environment variable:
```java
// Set ATLAS_URI in your environment
String uri = System.getenv("ATLAS_URI");
if (uri == null) {
    throw new Exception("ATLAS_URI must be specified");
}

MongoClient mongoClient = MongoClients.create(uri);
MongoDatabase database = mongoClient.getDatabase("sample_mflix");
MongoCollection<Document> collection = database.getCollection("movies");
```
Our ultimate goal is to call `collection.aggregate()` with our list of pipeline stages: search, project, and limit. There are driver convenience methods in `com.mongodb.client.model.Aggregates` for each of these.
```java
AggregateIterable<Document> aggregationResults = collection.aggregate(Arrays.asList(
    searchStage,
    project(fields(excludeId(),
        include("title", "cast", "genres"),
        metaSearchScore("score"),
        meta("scoreDetails", "searchScoreDetails"))),
    limit(10)));
```
The `$project` and `$limit` stages are both specified fully inline above. We’ll define `searchStage` in a moment. The `project` stage uses `metaSearchScore`, a Java driver convenience method, to map the Atlas Search computed score (more on this below) to a pseudo-field named `score`. Additionally, Atlas Search can provide score explanations; generating them incurs a performance hit, so use them only for debugging and experimentation. Score explanation details must be requested as an option on the `search` stage for them to be available for projection here. There is no convenience method for projecting scoring explanations, so we use the generic `meta()` method to provide the pseudo-field name and the key of the meta value Atlas Search returns for each document. The Java code above generates the aggregation pipeline below (the same one we built manually earlier), shown here so you can compare the Java code with the pipeline pieces it produces.
```json
[
  {
    "$search": { ... }
  },
  {
    "$project": {
      "_id": 0,
      "title": 1,
      "cast": 1,
      "genres": 1,
      "score": {
        "$meta": "searchScore"
      },
      "scoreDetails": {
        "$meta": "searchScoreDetails"
      }
    }
  },
  {
    "$limit": 10
  }
]
```
The `searchStage` consists of a search operator and an additional option. We want the relevancy scoring explanation details of each document generated and returned, which is enabled by the `scoreDetails` setting, a feature released after this version of the Java driver. Thankfully, the Java driver team built in pass-through capabilities for setting arbitrary options beyond the built-in ones, future-proofing the API. `SearchOptions.searchOptions().option()` allows us to set the `scoreDetails` option on the `$search` stage to true. To reiterate the note from above: generating score details incurs a performance hit in Lucene, so enable this setting only for debugging and experimentation, not in performance-sensitive environments.
```java
Bson searchStage = search(
    compound()
        .filter(List.of(genresClause))
        .must(List.of(SearchOperator.of(searchQuery))),
    searchOptions().option("scoreDetails", true)
);
```
That code builds this structure:
"$search": {
"compound": {
"filter": [ . . . ],
"must": [ . . . ]
},
"scoreDetails": true
}
We’ve left a couple of variables to fill in: `genresClause` and `searchQuery`.
> What are filters versus other compound operator clauses?
> * `filter`: clauses to narrow the query scope, not affecting the resultant relevancy score
> * `must`: required query clauses, affecting relevancy scores
> * `should`: optional query clauses, affecting relevancy scores
> * `mustNot`: clauses that must not match
Our (non-scoring) filter is a single search operator clause that combines required criteria for genres Drama and Romance:
```java
SearchOperator genresClause = SearchOperator.compound()
    .must(Arrays.asList(
        SearchOperator.text(fieldPath("genres"), "Drama"),
        SearchOperator.text(fieldPath("genres"), "Romance")
    ));
```
And that code builds this query operator structure:
"compound": {
"must": [
{
"text": {
"query": "Drama",
"path": "genres"
}
},
{
"text": {
"query": "Romance",
"path": "genres"
}
}
]
}
Notice how we nested the `genresClause` within our `filter` array, which takes a list of `SearchOperator`s. `SearchOperator` is a Java driver class with convenience builder methods for some, but not all, of the available Atlas Search search operators. You can see we used `SearchOperator.text()` to build up the genres clauses.
Last but not least is the primary (scoring!) `phrase` search operator clause to search for “keanu reeves” within the `cast` field. Alas, this is one search operator that currently does not have built-in `SearchOperator` support. Again, kudos to the Java driver development team for building in a pass-through for arbitrary BSON objects, provided we know the correct JSON syntax. Using `SearchOperator.of()`, we create an arbitrary operator out of a BSON document. Note: This is why it was emphasized early on to become savvy with the JSON structure of the aggregation pipeline syntax.
```java
Document searchQuery = new Document("phrase",
    new Document("query", "keanu reeves")
        .append("path", "cast"));
```
## And the results are…
So now we’ve built the aggregation pipeline. To show the results (shown earlier), we simply iterate through `aggregationResults`:
```java
aggregationResults.forEach(doc -> {
    System.out.println(doc.get("title"));
    System.out.println(" Cast: " + doc.get("cast"));
    System.out.println(" Genres: " + doc.get("genres"));
    System.out.println(" Score:" + doc.get("score"));
    // printScoreDetails(2, doc.toBsonDocument().getDocument("scoreDetails"));
    System.out.println("");
});
```
The results are ordered in descending score order. Score is a numeric factor based on the relationship between the query and each document. In this case, the only scoring component to our query was a phrase query of “keanu reeves”. Curiously, our results have documents with different scores! Why is that? If we covered everything, this article would never end, so addressing the scoring differences is beyond this scope, but we’ll explain a bit below for bonus and future material.
## Conclusion
You’re now an Atlas Search-savvy Java developer — well done! You’re well on your way to enhancing your applications with the power of full-text search. With just the steps and code presented here, even without additional configuration and deeper search understanding, the power of search is available to you.
This is only the beginning. And it is important, as we refine our application to meet our users’ demanding relevancy needs, to continue the Atlas Search learning journey.
### For further information
We finish our code with some insightful diagnostic output. An aggregation pipeline execution can be *explain*ed, dumping details of execution plans and performance timings. In addition, the Atlas Search process, `mongot`, provides details of `$search` stage interpretation and statistics.
System.out.println("Explain:");
System.out.println(format(aggregationResults.explain().toBsonDocument()));
We’ll leave delving into those details as an exercise to the reader, noting that you can learn a lot about how queries are interpreted/analyzed by studying the explain() output.
## Bonus section: relevancy scoring
Search relevancy is a scientific art. Without getting into mathematical equations and detailed descriptions of information retrieval research, let’s focus on the concrete scoring situation presented in our application here. The scoring component of our query is a phrase query of “keanu reeves” on the cast field. We do a `phrase` query rather than a `text` query so that we search for those two words contiguously, rather than “keanu OR reeves” (“keanu” is a rare term, of course, but there are many “reeves”).
Scoring takes into account the field length (the number of terms/words in the content), among other factors. Underneath, during indexing, each value of the cast field is run through an analysis process that tokenizes the text. Tokenization is a process splitting the content into searchable units, called terms. A “term” could be a word or fragment of a word, or the exact text, depending on the analyzer settings. Take a look at the `cast` field values in the returned movies. Using the default, `lucene.standard`, analyzer, the tokens emitted split at whitespace and other word boundaries, such as the dash character.
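To make that concrete, here is roughly what the `lucene.standard` analyzer emits for a few of the cast values above (a sketch of the analysis step, not runnable code):

```
"Keanu Reeves"           -> [keanu] [reeves]              (2 terms)
"Ebon Moss-Bachrach"     -> [ebon] [moss] [bachrach]      (3 terms)
"Aitana Sènchez-Gijèn"   -> [aitana] [sènchez] [gijèn]    (3 terms)
```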
Now do you see how the field length (number of terms) varies between the documents? If you’re curious of the even gnarlier details of how Lucene performs the scoring for our query, uncomment the `printScoreDetails` code in our results output loop.
Don’t worry if this section is a bit too much to take in right now. Stay tuned — we’ve got some scoring explanation content coming shortly.
We could quickly fix the ordering so that it is at least not biased by the presence or absence of hyphenated actor names: moving the query clause into the `filter` section rather than the `must` section, so that there are no scoring clauses, only filtering ones, would leave all documents equally ranked.
## Searching for more?
There are many useful Atlas Search resources available, several linked inline above; we encourage you to click through those to delve deeper. These quick three steps will have you up and searching quickly:
1. Create an Atlas account
2. Add some content
3. Create an Atlas Search index
Please also consider taking the free MongoDB University Atlas Search course.
And finally, we’ll leave you with the slick demonstration of Atlas Search on the movies collection at https://www.atlassearchmovies.com/ (though note that it fuzzily searches all searchable text fields, not just the cast field, and does so with OR logic querying, which is different than the `phrase` query only on the `cast` field we performed here). | md | {
"tags": [
"Atlas",
"Java"
],
"pageDescription": "This article delves into using the Atlas Search support built into the MongoDB Java driver",
"contentType": "Article"
} | Using Atlas Search from Java | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/amazon-sagemaker-and-mongodb-vector-search-part-1 | created | # Part #1: Build Your Own Vector Search with MongoDB Atlas and Amazon SageMaker
Have you heard about machine learning, models, and AI but don't quite know where to start? Do you want to search your data semantically? Are you interested in using vector search in your application?
Then you’ve come to the right place!
This series will introduce you to MongoDB Atlas Vector Search and Amazon SageMaker, and how to use both together to semantically search your data.
This first part of the series will focus on the architecture of such an application — i.e., the parts you need, how they are connected, and what they do.
The following parts of the series will then dive into the details of how the individual elements presented in this architecture work (Amazon SageMaker in Part 2 and MongoDB Atlas Vector Search in Part 3) and their actual configuration and implementation. If you are just interested in one of these two implementations, have a quick look at the architecture pictures and then head to the corresponding part of the series. But to get a deep understanding of Vector Search, I recommend reading the full series.
Let’s start with why though: Why should you use MongoDB Atlas Vector Search and Amazon SageMaker?
## Components of your application
In machine learning, an embedding model is a type of model that learns to represent objects — such as words, sentences, or even entire documents — as vectors in a high-dimensional space. These vectors, called embeddings, capture semantic relationships between the objects.
On the other hand, a large language model, which is a term you might have heard of, is designed to understand and generate human-like text. It learns patterns and relationships within language by processing vast amounts of text data. While it also generates embeddings as an internal representation, the primary goal is to understand and generate coherent text.
Embedding models are often used in tasks like natural language processing (NLP), where understanding semantic relationships is crucial. For example, word embeddings can be used to find similarities between words based on their contextual usage.
In summary, embedding models focus on representing objects in a meaningful way in a vector space, while large language models are more versatile, handling a wide range of language-related tasks by understanding and generating text.
For our needs in this application, an embedding model is sufficient. In particular, we will be using All MiniLM L6 v2 by Hugging Face.
Amazon SageMaker isn't just another AWS service; it's a versatile platform designed by developers, for developers. It empowers us to take control of our machine learning projects with ease. Unlike traditional ML frameworks, SageMaker simplifies the entire ML lifecycle, from data preprocessing to model deployment. As software engineers, we value efficiency, and SageMaker delivers precisely that, allowing us to focus more on crafting intelligent models and less on infrastructure management. It provides a wealth of pre-built algorithms, making it accessible even for those not deep into the machine learning field.
MongoDB Atlas Vector Search is a game-changer for developers like us who appreciate the power of simplicity and efficiency in database operations. Instead of sifting through complex queries and extensive code, Atlas Vector Search provides an intuitive and straightforward way to implement vector-based search functionality. As software engineers, we know how crucial it is to enhance the user experience with lightning-fast and accurate search results. This technology leverages the benefits of advanced vector indexing techniques, making it ideal for projects involving recommendation engines, content similarity, or even gaming-related features. With MongoDB Atlas Vector Search, we can seamlessly integrate vector data into our applications, significantly reducing development time and effort. It's a developer's dream come true – practical, efficient, and designed to make our lives easier in the ever-evolving world of software development.
## Generating and updating embeddings for your data
There are two steps to using Vector Search in your application.
The first step is to actually create vectors (also called embeddings or embedding vectors), as well as update them whenever your data changes. The easiest way to watch for newly inserted and updated data from your server application is to use MongoDB Atlas triggers and watch for exactly those two events. The triggers themselves are out of the scope of this tutorial but you can find other great resources about how to set them up in Developer Center.
The trigger then executes a script that creates new vectors. This can, for example, be done via MongoDB Atlas Functions or as in this diagram, using AWS Lambda. The script itself then uses the Amazon SageMaker endpoint with your desired model deployed via the REST API to create or update a vector in your Atlas database.
The important bit here that makes the usage so easy and the performance so great is that the data and the embeddings are saved inside the same database:
> Data that belongs together gets saved together.
How to deploy and prepare this SageMaker endpoint and offer it as a REST service for your application will be discussed in detail in Part 2 of this tutorial.
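As a rough, hedged sketch of what the script described above could look like in Python (the endpoint URL, database, collection, and field names are placeholders for illustration; Part 2 covers the real SageMaker setup):

```python
import requests
from pymongo import MongoClient

# Placeholders -- substitute your own REST endpoint and Atlas connection string.
EMBEDDING_ENDPOINT = "https://<your-rest-service>/embeddings"
collection = MongoClient("<your-atlas-connection-string>")["sample_db"]["documents"]

def update_embedding(doc_id, text):
    # 1. Ask the SageMaker-backed REST service to vectorize the text.
    response = requests.post(EMBEDDING_ENDPOINT, json={"text": text})
    embedding = response.json()["embedding"]

    # 2. Store the vector on the same document, next to the data it describes.
    collection.update_one({"_id": doc_id}, {"$set": {"embedding": embedding}})
```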
## Querying your data
The other half of your application will be responsible for taking in queries to semantically search your data.
Note that a search has to be done using the vectorized version of the query. And the vectorization has to be done with the same model that we used to vectorize the data itself. The same Amazon SageMaker endpoint can, of course, be used for that.
Therefore, whenever a client application sends a request to the server application, two things have to happen.
1. The server application needs to call the REST service that provides the Amazon SageMaker endpoint (see the previous section).
2. With the vector received, the server application then needs to execute a search using Vector Search to retrieve the results from the database.
The implementation of how to query Atlas can be found in Part 3 of this tutorial.
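Continuing the sketch from the previous section (same imports and placeholders), those two steps could look roughly like this; the index and field names are assumptions, and Part 3 walks through the real implementation:

```python
def semantic_search(query, limit=5):
    # 1. Vectorize the query with the same model used for the documents.
    response = requests.post(EMBEDDING_ENDPOINT, json={"text": query})
    query_vector = response.json()["embedding"]

    # 2. Let Atlas Vector Search find the closest document embeddings.
    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",   # name of your Atlas Vector Search index
                "path": "embedding",       # field that stores the document embeddings
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": limit,
            }
        }
    ]
    return list(collection.aggregate(pipeline))
```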
## Wrapping it up
This short, first part of the series has provided you with an overview of a possible architecture to use Amazon SageMaker and MongoDB Atlas Vector Search to semantically search your data.
Have a look at Part 2 if you are interested in how to set up Amazon SageMaker and Part 3 to go into detail about MongoDB Atlas Vector Search.
✅ Sign-up for a free cluster.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
✅ Get help on our Community Forums.
| md | {
"tags": [
"Atlas",
"Python",
"AI",
"Serverless",
"AWS"
],
"pageDescription": "In this series, we look at how to use Amazon SageMaker and MongoDB Atlas Vector Search to semantically search your data.",
"contentType": "Tutorial"
} | Part #1: Build Your Own Vector Search with MongoDB Atlas and Amazon SageMaker | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/searching-nearby-points-interest-mapbox | created | # Searching for Nearby Points of Interest with MongoDB and Mapbox
When it comes to location data, MongoDB's ability to work with GeoJSON through geospatial queries is often under-appreciated. Being able to query for intersecting or nearby coordinates while maintaining performance is functionality a lot of organizations are looking for.
Take the example of maintaining a list of business locations or even a fleet of vehicles. Knowing where these locations are, relative to a particular position isn't an easy task when doing it manually.
In this tutorial we're going to explore the `$near` operator within a MongoDB Realm application to find stored points of interest within a particular proximity to a position. These points of interest will be rendered on a map using the Mapbox service.
To get a better idea of what we're going to accomplish, take the following animated image for example:
We're going to pre-load our MongoDB database with a few points of interest that are formatted using the GeoJSON specification. When clicking around on the map, we're going to use the `$near` operator to find new points of interest that are within range of the marker.
## The Requirements
There are numerous components that must be accounted for to be successful with this tutorial:
- A MongoDB Atlas free tier cluster or better to store the data.
- A MongoDB Realm application to access the data from a client-facing application.
- A Mapbox free tier account or better to render the data on a map.
The assumption is that MongoDB Atlas has been properly configured and that MongoDB Realm is using the MongoDB Atlas cluster.
>MongoDB Atlas can be used for FREE with a M0 sized cluster. Deploy MongoDB in minutes within the MongoDB Cloud.
In addition to Realm being pointed at the Atlas cluster, anonymous authentication for the Realm application should be enabled and an access rule should be defined for the collection. All users should be able to read all documents for this tutorial.
In this example, Mapbox is a third-party service for showing interactive map tiles. An account is necessary and an access token to be used for development should be obtained. You can learn how in the Mapbox documentation.
## MongoDB Geospatial Queries and the GeoJSON Data Model
Before diving into geospatial queries and creating an interactive client-facing application, a moment should be taken to understand the data and indexes that must be created within MongoDB.
Take the following example document:
``` json
{
    "_id": "5ec6fec2318d26b626d53c61",
    "name": "WorkVine209",
    "location": {
        "type": "Point",
        "coordinates": [
            -121.4123,
            37.7621
        ]
    }
}
```
Let's assume that documents that follow the above data model exist in a **location_services** database and a **points_of_interest** collection.
To be successful with our queries, we only need to store the location type and the coordinates. This `location` field makes up a GeoJSON feature, which follows a specific format. The `name` field, while useful, isn't an absolute requirement. Some other optional fields might include an `address` field, `hours_of_operation`, or similar.
Before being able to execute the geospatial queries that we want, we need to create a special index.
The following index should be created:
``` none
db.points_of_interest.createIndex({ location: "2dsphere" });
```
The above index can be created numerous ways, for example, you can create it using the MongoDB shell, Atlas, Compass, and a few other ways. Just note that the `location` field is being classified as a `2dsphere` for the index.
With the index created, we can execute a query like the following:
``` none
db.points_of_interest.find({
    "location": {
        "$near": {
            "$geometry": {
                "type": "Point",
                "coordinates": [-121.4252, 37.7397]
            },
            "$maxDistance": 2500
        }
    }
});
```
Notice in the above example, we're looking for documents that have a `location` field within 2,500 meters of the point provided in the filter.
With an idea of how the data looks and how the data can be accessed, let's work towards creating a functional application.
## Interacting with Places using MongoDB Realm and Mapbox
Like previously mentioned, you should already have a Mapbox account and MongoDB Realm should already be configured.
On your computer, create an **index.html** file with the following boilerplate code:
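A minimal version of that boilerplate looks roughly like the following. The CDN URLs and library versions are assumptions, so check the current Mapbox GL JS and Realm Web SDK documentation for the exact script tags:

``` xml
<!DOCTYPE html>
<html>
    <head>
        <!-- Mapbox GL JS and its stylesheet (version shown is an assumption; use the current release) -->
        <script src="https://api.mapbox.com/mapbox-gl-js/v2.9.1/mapbox-gl.js"></script>
        <link href="https://api.mapbox.com/mapbox-gl-js/v2.9.1/mapbox-gl.css" rel="stylesheet" />
        <!-- MongoDB Realm Web SDK -->
        <script src="https://unpkg.com/realm-web/dist/bundle.iife.js"></script>
        <style>
            /* Lightly style the page so the map fills the viewport */
            body { margin: 0; padding: 0; }
            #map { position: absolute; top: 0; bottom: 0; width: 100%; }
        </style>
    </head>
    <body>
        <!-- Placeholder component that Mapbox GL JS renders the map into -->
        <div id="map"></div>
        <script>
            // Application code goes here, as covered in the rest of the tutorial.
        </script>
    </body>
</html>
```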
In the above code, we're including both the Mapbox library as well as the MongoDB Realm SDK. We're creating a `map` placeholder component which will show our map, and it is lightly styled with CSS.
You can run this file locally, serve it, or host it on MongoDB Realm.
Within the ` | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to use the $near operator in a MongoDB geospatial query to find nearby points of interest.",
"contentType": "Tutorial"
} | Searching for Nearby Points of Interest with MongoDB and Mapbox | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/bash/get-started-atlas-aws-cloudformation | created | # Get Started with MongoDB Atlas and AWS CloudFormation
It's pretty amazing that we can now deploy and control massive systems
in the cloud from our laptops and phones. And it's so easy to take for
granted when it all works, but not so awesome when everything is broken
after coming back on Monday morning after a long weekend! On top of
that, the tooling that's available is constantly changing and updating
and soon you are drowning in dependabot PRs.
The reality of setting up and configuring all the tools necessary to
deploy an app is time-consuming, error-prone, and can result in security
risks if you're not careful. These are just a few of the reasons we've
all witnessed the incredible growth of DevOps tooling as we continue the
evolution to and in the cloud.
AWS CloudFormation is an
infrastructure-as-code (IaC) service that helps you model and set up
your Amazon Web Services resources so that you can spend less time
managing those resources and more time focusing on your applications
that run in AWS. CloudFormation, or CFN, lets users create and manage
AWS resources directly from templates which provide dependable out of
the box blueprint deployments for any kind of cloud app.
To better serve customers using modern cloud-native workflows, MongoDB
Atlas supports native CFN templates with a new set of Resource Types.
This new integration allows you to manage complete MongoDB Atlas
deployments through the AWS CloudFormation console and CLI so your apps
can securely consume data services with full AWS cloud ecosystem
support.
## Launch a MongoDB Atlas Stack on AWS CloudFormation
We created a helper project that walks you through an end-to-end example
of setting up and launching a MongoDB Atlas stack in AWS CloudFormation.
The
get-started-aws-cfn
project builds out a complete MongoDB Atlas deployment, which includes a
MongoDB Atlas project, cluster, AWS IAM role-type database user, and IP
access list entry.
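If you would like a sense of what such a template contains before running the scripts, here is a heavily abridged sketch. The resource type names are the published MongoDB Atlas CloudFormation types, but the properties shown are illustrative and incomplete; rely on the project's templates and the resource type schemas for the real definitions.

``` yaml
# Abridged illustration only -- see the get-started-aws-cfn templates for the full versions.
Resources:
  AtlasProject:
    Type: MongoDB::Atlas::Project
    Properties:
      Name: get-started-project
      OrgId: !Ref AtlasOrgId              # template parameter holding your Atlas organization ID
  AtlasCluster:
    Type: MongoDB::Atlas::Cluster
    Properties:
      Name: get-started-cluster
      ProjectId: !GetAtt AtlasProject.Id  # illustrative; check the schema for the exact attribute name
```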
>
>
>You can also use the AWS Quick Start for MongoDB
>Atlas
>that uses the same resources for CloudFormation and includes network
>peering to a new or existing VPC.
>
>
You're most likely already set up to run the
get-started-aws-cfn
since the project uses common tools like the AWS CLI and Docker, but
just in case, head over to the
prerequisites
section to check your development machine. (If you haven't already,
you'll want to create a MongoDB Atlas
account.)
The project has two main parts: "get-setup" will deploy and configure
the MongoDB Atlas CloudFormation resources into the AWS region of your
choice, while "get-started" will launch your complete Atlas deployment.
## Step 1) Get Set Up
Clone the
get-started-aws-cfn
repo and get
setup:
``` bash
git clone https://github.com/mongodb-developer/get-started-aws-cfn
cd get-started-aws-cfn
./get-setup.sh
```
## Step 2) Get Started
Run the get-started script:
``` bash
./get-started.sh
```
Once the stack is launched, you will start to see resources getting
created in the AWS CloudFormation console. The Atlas cluster takes a few
minutes to spin up completely, and you can track the progress in the
console or through the AWS CLI.
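For example, you can poll the stack status from the AWS CLI with something like the following (substitute your own stack name and region):

``` bash
aws cloudformation describe-stacks \
    --stack-name <your-stack-name> \
    --query "Stacks[0].StackStatus" \
    --output text
```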
## Step 3) Get Connected
Once your MongoDB Atlas cluster is deployed successfully, you can find
its connection string under the Outputs tab as the value for the
"ClusterSrvAddress' key.
The Get-Started project also has a helper script to combine the AWS and
MongoDB shells to securely connect via an AWS IAM role session. Check
out connecting to your
cluster
for more information.
What next? You can connect to your MongoDB Atlas cluster from the mongo
shell, MongoDB
Compass, or any of
our supported
drivers. We have
guides for those using Atlas with popular languages: Here's one for how
to connect to Atlas with
Node.js
and another for getting started with
Java.
## Conclusion
Use the MongoDB Atlas CloudFormation Resources to power everything from
the most basic "hello-world' apps to the most advanced devops pipelines.
Jump start your new projects today with the MongoDB Atlas AWS
CloudFormation Get-Started
project!
>
>
>If you have questions, please head to our developer community
>website where the MongoDB engineers and
>the MongoDB community will help you build your next big idea with
>MongoDB.
>
>
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace. | md | {
"tags": [
"Bash",
"Atlas",
"AWS"
],
"pageDescription": "Learn how to get started with MongoDB Atlas and AWS CloudFormation.",
"contentType": "Code Example"
} | Get Started with MongoDB Atlas and AWS CloudFormation | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/hashicorp-vault-kmip-secrets-engine-mongodb | created | # How to Set Up HashiCorp Vault KMIP Secrets Engine with MongoDB CSFLE or Queryable Encryption
Encryption is proven and trusted and has been around for close to 60 years, but there are gaps. So when we think about moving data (TLS encryption) and storing data (storage encryption), most databases have that covered. But as soon as data is in use, processed by the database, it's in plain text and more vulnerable to insider access and active breaches. Most databases do not have this covered.
With MongoDB’s Client-Side Field Level Encryption (CSFLE) and Queryable Encryption, applications can encrypt sensitive plain text fields in documents prior to transmitting data to the server. This means that data processed by database (in use) will not be in plain text as it’s always encrypted and most importantly still can be queried. The encryption keys used are typically stored in a key management service.
Organizations with a multi-cloud strategy face the challenge of how to manage encryption keys across cloud environments in a standardized way, as the public cloud KMS services use proprietary APIs — e.g., AWS KMS, Azure Key Vault, or GCP KMS — to manage encryption keys. Organizations wanting to have a standardized way of managing the lifecycle of encryption keys can utilize KMIP, Key Management Interoperability Protocol.
As shown in the diagram above, KMIPs eliminate the sprawl of encryption key management services in multiple cloud providers by utilizing a KMIP-enabled key provider. MongoDB CSFLE and Queryable Encryption support KMIP as a key provider.
In this article, I will showcase how to use MongoDB Queryable Encryption and CSFLE with Hashicorp Key Vault KMIP Secrets Engine to have a standardized way of managing the lifecycle of encryption keys regardless of cloud provider.
## Encryption terminology
Before I dive deeper into how to actually use MongoDB CSFLE and Queryable Encryption, I will explain encryption terminology and the common practice to encrypt plain text data.
**Customer Master Key (CMK)** is the encryption key used to protect (encrypt) the Data Encryption Keys, which is on the top level of the encryption hierarchy.
**The Data Encryption Key (DEK)** is used to encrypt the data that is plain text. Once plain text is encrypted by the DEK, it will be in cipher text.
**Plain text** data is unencrypted information that you wish to protect.
**Cipher text** is encrypted information unreadable by a human or computer without decryption.
**Envelope encryption** is the practice of encrypting **plain text** data with a **data encryption key** (DEK) and then encrypting the data key using the **customer master key**.
**The prerequisites to enable querying in CSFLE or Queryable Encryption mode are:**
* A running Key Management System which supports the KMIP standard — e.g., HashiCorp Key Vault. Application configured to use the KMIP endpoint.
* Data Encryption Keys (DEK) created and an encryption JSON schema that is used by a MongoDB driver to know which fields to encrypt.
* An authenticated MongoDB connection with CSFLE/Queryable Encryption enabled.
* You will need a supported server version and a compatible driver version. For this tutorial we are going to use MongoDB Atlas version 6.0. Refer to documentation to see what driver versions for CSFLE or Queryable Encryption is required.
Once the above are fulfilled, this is what happens when a query is executed.
**Step 1:** Upon receiving a query, the MongoDB driver checks to see if any encrypted fields are involved using the JSON encryption schema that is configured when connecting to the database.
**Step 2:** The MongoDB driver requests the Customer Master Key (CMK) key from the KMIP key provider. In our setup, it will be HashiCorp Key Vault.
**Step 3:** The MongoDB driver decrypts the data encryptions keys using the CMK. The DEK is used to encrypt/decrypt the plain text fields. What fields to encrypt/decrypt are defined in the JSON encryption schema. The encrypted data encryption keys are stored in a key vault collection in your MongoDB cluster.
**Step 4:** The driver submits the query to the MongoDB server with the encrypted fields rendered as ciphertext.
**Step 5:** MongoDB returns the encrypted results of the query to the MongoDB driver, still as ciphertext.
**Step 6:** MongoDB Driver decrypts the encrypted fields using DEK to plain text and returns it to the authenticated client.
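The driver performs these steps automatically once automatic encryption is enabled on the client connection. As a rough sketch of what enabling it looks like with PyMongo (the key vault namespace, KMIP endpoint, and certificate paths below mirror this tutorial's configuration files, while the connection string and schema map are placeholders):

```python
from pymongo import MongoClient
from pymongo.encryption_options import AutoEncryptionOpts

# Values mirroring the tutorial's CSFLE configuration; the connection string is a placeholder.
kms_providers = {"kmip": {"endpoint": "localhost:5697"}}
kms_tls_options = {
    "kmip": {
        "tlsCAFile": "vault/certs/FLE/vv-ca.pem",
        "tlsCertificateKeyFile": "vault/certs/FLE/vv-client.pem",
    }
}

auto_encryption_opts = AutoEncryptionOpts(
    kms_providers,
    "DEMO-KMIP-FLE.datakeys",     # key vault collection that stores the encrypted DEKs
    schema_map=None,              # supply the JSON encryption schema describing fields to encrypt
    kms_tls_options=kms_tls_options,
)

client = MongoClient("<your-atlas-connection-string>",
                     auto_encryption_opts=auto_encryption_opts)
```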
Next is to actually set up and configure the prerequisites needed to enable querying MongoDB in CSFLE or Queryable Encryption mode.
## What you will set up
So let's look at what's required to install, configure, and run to implement what's described in the section above.
* MongoDB Atlas cluster: MongoDB Atlas is a fully managed data platform for modern applications. Storing data the way it’s accessed as documents makes developers more productive. It provides a document-based database that is cost-efficient and resizable while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It allows you to focus on your applications by providing the foundation of high performance, high availability, security, and compatibility they need. For this tutorial we are going to use MongoDB Atlas version 6.0. Refer to documentation to see what driver versions for CSFLE or Queryable Encryption is required.
* Hashicorp Vault Enterprise: Run and configure the Hashicorp Key Vault **KMIP** Secrets Engine, along with Scopes, Roles, and Certificates.
* Python application: This showcases how CSFLE and Queryable Encryption can be used with HashiCorp Key Vault. I will show you how to configure DEK, JSON Schema, and a MongoDB authenticated client to connect to a database and execute queries that can query on encrypted data stored in a collection in MongoDB Atlas.
## Prerequisites
First off, we need to have at least an Atlas account to provision Atlas and then somewhere to run our automation. You can get an Atlas account for free at mongodb.com. If you want to take this tutorial for a spin, take the time and create your Atlas account now.
You will also need to have Docker installed, as we are using a Docker container with a prebaked image containing all needed dependencies, such as HashiCorp Key Vault, the MongoDB driver, and the crypto library. For more information on how to install Docker, see Get Started with Docker. Also, install the latest version of MongoDB Compass, which we will use to verify that the fields in the collection have been encrypted.
Now we are almost ready to get going. You’ll need to clone this tutorial’s Github repository. You can clone the repo by using the below command:
```
git clone https://github.com/mongodb-developer/mongodb-kmip-fle-queryable
```
There are four main steps to get this tutorial running:
* Retrieval of trial license key for Hashicorp Key Vault
* Update database connection string
* Start docker container, embedded with Hashicorp Key Vault
* Run Python application, showcasing CSFLE and Queryable Encryption
## Retrieval of trial license key for Hashicorp Key Vault
Next, request a trial license key for HashiCorp Vault Enterprise from the HashiCorp product page. Copy the license key that is generated.
Replace the content of **license.txt** with the license key generated in the step above. The file is located in the cloned GitHub repository at kmip-with-hashicorp-key-vault/vault/license.txt.
## Update database connection string
You will need to update the connection string so the Python application can connect to your MongoDB Atlas cluster. It’s best to update both configuration files as this tutorial will demonstrate both CSFLE and Queryable Encryption.
**For CSFLE**: Open the file kmip-with-hashicorp-key-vault/configuration\_fle.py and update connection\_uri on line 3.
```
encrypted_namespace = "DEMO-KMIP-FLE.users"
key_vault_namespace = "DEMO-KMIP-FLE.datakeys"
connection_uri = "mongodb+srv://:@?retryWrites=true&w=majority"
# Configure the "kmip" provider.
kms_providers = {
"kmip": {
"endpoint": "localhost:5697"
}
}
kms_tls_options = {
"kmip": {
"tlsCAFile": "vault/certs/FLE/vv-ca.pem",
"tlsCertificateKeyFile": "vault/certs/FLE/vv-client.pem"
}
}
```
Replace the username, password, and cluster address placeholders in connection\_uri with your Atlas cluster connection details. You should end up with something like this:
```
encrypted_namespace = "DEMO-KMIP-FLE.users"
key_vault_namespace = "DEMO-KMIP-FLE.datakeys"
connection_uri = "mongodb+srv://admin:mPassword@demo-cluster.tcrpd.mongodb.net/myFirstDatabase?retryWrites=true&w=majority"
# Configure the "kmip" provider.
kms_providers = {
"kmip": {
"endpoint": "localhost:5697"
}
}
kms_tls_options = {
"kmip": {
"tlsCAFile": "vault/certs/FLE/vv-ca.pem",
"tlsCertificateKeyFile": "vault/certs/FLE/vv-client.pem"
}
}
```
**For Queryable Encryption**: Open the file kmip-with-hashicorp-key-vault/configuration\_queryable.py in the cloned GitHub repository and update connection\_uri on line 3 in the same way, replacing the username, password, and cluster address placeholders with your Atlas cluster connection details. You should end up with something like this:
```
encrypted_namespace = "DEMO-KMIP-QUERYABLE.users"
key_vault_namespace = "DEMO-KMIP-QUERYABLE.datakeys"
connection_uri = "mongodb+srv://admin:mPassword@demo-cluster.tcrpd.mongodb.net/myFirstDatabase?retryWrites=true&w=majority"
# Configure the "kmip" provider.
kms_providers = {
"kmip": {
"endpoint": "localhost:5697"
}
}
kms_tls_options = {
"kmip": {
"tlsCAFile": "vault/certs/QUERYABLE/vv-ca.pem",
"tlsCertificateKeyFile": "vault/certs/QUERYABLE/vv-client.pem"
}
}
```
## Start Docker container
A prebaked Docker image is provided with HashiCorp Vault and the MongoDB shared library installed. The MongoDB shared library is the translation layer that takes an unencrypted query and translates it into an encrypted format that the server understands. It is what makes it so that you don't need to rewrite all of your queries with explicit encryption calls. You don't need to build the Docker image, as it’s already published on Docker Hub. Start the container in the root of this repo. The current folder will be mounted to /kmip in the running container, and port 8200 is mapped so you will be able to access the HashiCorp Vault console running in the Docker container. ${PWD} resolves to the path you are running the command from; if you are running this tutorial in a Windows shell, replace ${PWD} with the full path to the root of the cloned GitHub repository.
```
docker run -p 8200:8200 -it -v ${PWD}:/kmip piepet/mongodb-kmip-vault:latest
```
## Start Hashicorp Key Vault server
Running the commands below within the started Docker container will start the HashiCorp Vault server and configure the HashiCorp KMIP secrets engine. Scopes, roles, and certificates (vv-client.pem, vv-ca.pem, vv-key.pem) will be generated, with a separate set for CSFLE and for Queryable Encryption.
```
cd kmip
./start_and_configure_vault.sh -a
```
Wait until you see the below output in your command console:
You can now access the HashiCorp Vault console by going to http://localhost:8200/. You should see the Vault sign-in screen in your browser.
Let’s sign in to the HashiCorp Vault console to see what has been configured. Use the root token output in your shell console. Once you are logged in, you should see the configured secrets engines.
The script that you just executed — `./start_and_configure_vault.sh -a` — uses the Hashicorp Vault cli to create all configurations needed, such as Scopes, Roles, and Certificates. You can explore what's created by clicking demo/kmip.
If you want to use the HashiCorp Vault server from outside the Docker container, you will need to map port 5697 as well.
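For example, the earlier run command could be extended like this (same image and flags, with the KMIP port published as well):
```
docker run -p 8200:8200 -p 5697:5697 -it -v ${PWD}:/kmip piepet/mongodb-kmip-vault:latest
```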
## Run CSFLE Python application encryption
A sample Python application will be used to showcase the capabilities of CSFLE where the encryption schema is defined on the database. Let's start by looking at the main method of the Python application in the file located at `kmip-with-hashicorp-key-vault/vault_encrypt_with_csfle_kmip.py`.
```
def main():
    reset()
    #1,2 Configure your KMIP Provider and Certificates
    kmip_provider_config = configure_kmip_provider()
    #3 Configure Encryption Data Keys
    data_keys_config = configure_data_keys(kmip_provider_config)
    #4 Create collection with Validation Schema for CSFLE defined, stored server-side
    create_collection_with_schema_validation(data_keys_config)
    #5 Configure Encrypted Client
    secure_client = configure_csfle_session()
    #6 Run Query
    create_user(secure_client)

if __name__ == "__main__":
    main()
```
**Row 118:** Drops database, just to simplify rerunning this tutorial. In a production setup, this would be removed.
**Row 120:** Configures the MongoDB driver to use the Hashicorp Vault KMIP secrets engine, as the key provider. This means that CMK will be managed by the Hashicorp Vault KMIP secrets engine.
**Row 122:** Creates Data Encryption Keys to be used to encrypt/decrypt fields in collection. The encrypted data encryption keys will be stored in the database **DEMO-KMIP-FLE** in collection **datakeys**.
**Row 124:** Creates collection and attaches Encryption JSON schema that defines which fields need to be encrypted.
**Row 126:** Creates a MongoClient that enables CSFLE and uses Hashicorp Key Vault KMIP Secrets Engine as the key provider.
**Row 128:** Inserts a user into database **DEMO-KMIP-FLE** and collection **users**, using the MongoClient that is configured at row 126. It then does a lookup on the SSN field to validate that MongoDB driver can query on encrypted data.
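To give a feel for what row 128 does, here is a rough sketch of such a `create_user` function. The document values are made up, and the exact code in the repository may differ slightly.
```
def create_user(secure_client):
    users = secure_client["DEMO-KMIP-FLE"]["users"]
    # Fields listed in the encryption schema (ssn, contact.mobile, contact.email)
    # are encrypted transparently by the driver before they leave the client.
    users.insert_one({
        "name": "Alice Example",
        "ssn": "123-45-6789",
        "contact": {"mobile": "555-0100", "email": "alice@example.com"}
    })
    # An equality query on the encrypted ssn field still works through the encrypted client.
    print(users.find_one({"ssn": "123-45-6789"}))
```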
Let's start the Python application by executing the below commands in the running docker container:
```
cd /kmip/kmip-with-hashicorp-key-vault/
python3.8 vault_encrypt_with_csfle_kmip.py
```
Start MongoDB Compass, connect to your database DEMO-KMIP-FLE, and review the collection users. Fields that should be encrypted are ssn, contact.mobile, and contact.email. You should now be able to see in Compass that fields that are encrypted are masked by \*\*\*\*\*\* shown as value — see the picture below:
## Run Queryable Encryption Python application
A sample Python application will be used to showcase the capabilities of Queryable Encryption, currently in Public Preview, with schema defined on the server. Let's start by looking at the main method of the Python application in the file located at `kmip-with-hashicorp-key-vault/vault_encrypt_with_queryable_kmip.py`.
```
def main():
    reset()
    #1,2 Configure your KMIP Provider and Certificates
    kmip_provider_config = configure_kmip_provider()
    #3 Configure Encryption Data Keys
    data_keys_config = configure_data_keys(kmip_provider_config)
    #4 Create Schema for Queryable Encryption, will be stored in database
    encrypted_fields_map = create_schema(data_keys_config)
    #5 Configure Encrypted Client
    secure_client = configure_queryable_session(encrypted_fields_map)
    #6 Run Query
    create_user(secure_client)

if __name__ == "__main__":
    main()
```
**Row 121:** Drops database, just to simplify rerunning application. In a production setup, this would be removed.
**Row 123:** Configures the MongoDB driver to use the Hashicorp Vault KMIP secrets engine, as the key provider. This means that CMK will be managed by the Hashicorp Vault KMIP secrets engine.
**Row 125:** Creates Data Encryption Keys to be used to encrypt/decrypt fields in collection. The encrypted data encryption keys will be stored in the database **DEMO-KMIP-QUERYABLE** in collection datakeys.
**Row 127:** Creates Encryption Schema that defines which fields need to be encrypted. It’s important to note the encryption schema has a different format compared to CSFLE Encryption schema.
**Row 129:** Creates a MongoClient that enables Queryable Encryption and uses Hashicorp Key Vault KMIP Secrets Engine as the key provider.
**Row 131:** Inserts a user into database **DEMO-KMIP-QUERYABLE** and collection **users**, using the MongoClient that is configured at row 129. It then does a lookup on the SSN field to validate that MongoDB driver can query on encrypted data.
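For reference, a Queryable Encryption field map is structured roughly as in the sketch below. It is illustrative only: the tutorial's `create_schema` function builds the real one, and `data_key_id` stands for a DEK created in the datakeys collection.
```
encrypted_fields_map = {
    "fields": [
        {
            "path": "ssn",
            "bsonType": "string",
            "keyId": data_key_id,                 # DEK from DEMO-KMIP-QUERYABLE.datakeys
            "queries": {"queryType": "equality"}  # makes ssn queryable by equality
        }
    ]
}
```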
Let's start the Python application to test Queryable Encryption.
```
cd /kmip/kmip-with-hashicorp-key-vault/
python3.8 vault_encrypt_with_queryable_kmip.py
```
Start MongoDB Compass, connect to your database DEMO-KMIP-QUERYABLE, and review the collection users. Fields that should be encrypted are ssn, contact.mobile, and contact.email. You should now be able to see in Compass that fields that are encrypted are masked by \*\*\*\*\*\* shown as value, as seen in the picture below.
### Cleanup
If you want to rerun the tutorial, run the following in the root of this git repository outside the docker container.
```
./cleanup.sh
```
## Conclusion
In this blog, you have learned how to configure and set up CSFLE and Queryable Encryption with the HashiCorp Vault KMIP secrets engine. By utilizing KMIP, you will have a standardized way of managing the lifecycle of encryption keys, regardless of public cloud KMS services. Learn more about CSFLE and Queryable Encryption. | md | {
"tags": [
"Atlas",
"Python"
],
"pageDescription": "In this blog, learn how to use Hashicorp Vault KMIP Secrets Engine with CSFLE and Queryable Encryption to have a standardized way of managing encryption keys.",
"contentType": "Tutorial"
} | How to Set Up HashiCorp Vault KMIP Secrets Engine with MongoDB CSFLE or Queryable Encryption | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/how-seamlessly-use-mongodb-atlas-ibm-watsonx-ai-genai-applications | created | # How to Seamlessly Use MongoDB Atlas and IBM watsonx.ai LLMs in Your GenAI Applications
One of the challenges of e-commerce applications is to provide relevant and personalized product recommendations to customers. Traditional keyword-based search methods often fail to capture the semantic meaning and intent of the user search queries, and return results that do not meet the user’s needs. In turn, they fail to convert into a successful sale. To address this problem, RAG (retrieval-augmented generation) is used as a framework powered by MongoDB Atlas Vector Search, LangChain, and IBM watsonx.ai.
RAG is a natural language generation (NLG) technique that leverages a retriever module to fetch relevant documents from a large corpus and a generator module to produce text conditioned on the retrieved documents. Here, the RAG framework is used to power product recommendations as an extension to existing semantic search techniques.
- RAG use cases can be easily built using the vector search capabilities of MongoDB Atlas to store and query large-scale product embeddings that represent the features and attributes of each product. Because of MongoDB’s flexible schema, these are stored right alongside the product embeddings, eliminating the complexity and latency of having to retrieve the data from separate tables or databases.
- RAG then retrieves the most similar products to the user query based on the cosine similarity of their embeddings, and generates natural language reasons that highlight why these products are relevant and appealing to the user.
- RAG can also enhance the user experience (UX) by handling complex and diverse search queries, such as "a cozy sweater for winter" or "a gift for my daughter who is interested in science", and provides accurate and engaging product recommendations that increase customer satisfaction and loyalty.
IBM watsonx.ai is IBM’s next-generation enterprise studio for AI builders, bringing together new generative AI capabilities with traditional machine learning (ML) that span the entire AI lifecycle. With watsonx.ai, you can train, validate, tune, and deploy foundation and traditional ML models.
watsonx.ai brings forth a curated library of foundation models, including IBM-developed models, open-source models, and models sourced from third-party providers. Not all models are created equal, and the curated library provides enterprises with the optionality to select the model best suited to a particular use case, industry, domain, or even price performance. Further, IBM-developed models, such as the Granite model series, offer another level of enterprise-readiness, transparency, and indemnification for production use cases. We’ll be using Granite models in our demonstration. For the interested reader, IBM has published information about its data and training methodology for its Granite foundation models.
## How to build a custom RAG-powered product discovery pipeline
For this tutorial, we will be using an e-commerce products dataset containing over 10,000 product details. We will be using the sentence-transformers/all-mpnet-base-v2 model from Hugging Face to generate the vector embeddings to store and retrieve product information. You will need a Python notebook or an IDE, a MongoDB Atlas account, and a watsonx.ai account for hands-on experience.
For convenience, the notebook to follow along and execute in your environment is available on GitHub.
### Python dependencies
* `langchain`: Orchestration framework
* `ibm-watson-machine-learning`: For IBM LLMs
* `wget`: To download knowledge base data
* `sentence-transformers`: For embedding model
* `pymongo`: For the MongoDB Atlas vector store
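If you are setting the environment up manually, a typical notebook install cell for these dependencies would look like the sketch below. Versions are not pinned here, and the companion notebook on GitHub takes care of the installs for you.
```python
!pip install langchain ibm-watson-machine-learning wget sentence-transformers pymongo
```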
### watsonx.ai dependencies
We’ll be using the watsonx.ai foundation models and Python SDK to implement our RAG pipeline in LangChain.
1. **Sign up for a free watsonx.ai trial on IBM cloud**. Register and get set up.
2. **Create a watsonx.ai Project**. During onboarding, a sandbox project can be quickly created for you. You can either use the sandbox project or create one; the link will work once you have registered and set up watsonx.ai. If more help is needed, you can read the documentation.
3. **Create an API key to access watsonx.ai foundation models**. Follow the steps to create your API key.
4. **Install and use watsonx.ai**. Also known as the IBM Watson Machine Learning SDK, watsonx.ai SDK information is available on GitHub. Like any other Python module, you can install it with a pip install. Our example notebook takes care of this for you.
We will be running all the code snippets below in a Jupyter notebook. You can choose to run these on VS Code or any other IDE of your choice.
**Initialize the LLM**
Initialize the watsonx URL to connect by running the below code blocks in your Jupyter notebook:
```python
import os
import getpass

# watsonx URL
try:
    wxa_url = os.environ["WXA_URL"]
except KeyError:
    wxa_url = getpass.getpass("Please enter your watsonx.ai URL domain (hit enter): ")
```
Enter the URL for accessing the watsonx URL domain. For example: https://us-south.ml.cloud.ibm.com.
To be able to access the LLM models and other AI services on watsonx, you need to initialize the API key. You can initialize the API key by running the following code block in your Jupyter notebook:
```python
# watsonx API Key
try:
    wxa_api_key = os.environ["WXA_API_KEY"]
except KeyError:
    wxa_api_key = getpass.getpass("Please enter your watsonx.ai API key (hit enter): ")
```
You will be prompted when you run the above code to add the IAM API key you fetched earlier.
Each experiment can be tagged or executed under a specific project. To fetch the relevant project, we can initialize the project ID by running the below code block in the Jupyter notebook:
```python
# watsonx Project ID
try:
    wxa_project_id = os.environ["WXA_PROJECT_ID"]
except KeyError:
    wxa_project_id = getpass.getpass("Please enter your watsonx.ai Project ID (hit enter): ")
```
You can find the project ID alongside your IAM API key in the settings panel in the watsonx.ai portal.
**Language model**
In the code example below, we will initialize Granite LLM from IBM and then demonstrate how to use the initialized LLM with the LangChain framework before we build our RAG.
We will use the query: "I want to introduce my daughter to science and spark her enthusiasm. What kind of gifts should I get her?"
This will help us demonstrate how the LLM and vector search work in an RAG framework at each step.
Firstly, let us initialize the LLM hosted on the watsonx cloud. To access the relevant Granite model from watsonx, you need to run the following code block to initialize and test the model with our sample query in the Jupyter notebook:
```python
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
from ibm_watson_machine_learning.foundation_models.utils.enums import DecodingMethods
parameters = {
    GenParams.DECODING_METHOD: DecodingMethods.GREEDY,
    GenParams.MIN_NEW_TOKENS: 1,
    GenParams.MAX_NEW_TOKENS: 100
}

model = Model(
    model_id=ModelTypes.GRANITE_13B_INSTRUCT,
    params=parameters,
    credentials={
        "url": wxa_url,
        "apikey": wxa_api_key
    },
    project_id=wxa_project_id
)
from ibm_watson_machine_learning.foundation_models.extensions.langchain import WatsonxLLM
granite_llm_ibm = WatsonxLLM(model=model)
# Sample query chosen in the example to evaluate the RAG use case
query = "I want to introduce my daughter to science and spark her enthusiasm. What kind of gifts should I get her?"
# Sample LLM query without RAG framework
result = granite_llm_ibm(query)
```
Output:
![Jupyter Notebook Output][3]
### Initialize MongoDB Atlas for vector search
Prior to starting this section, you should have already set up a cluster in MongoDB Atlas. If you have not created one for yourself, then you can follow the steps in the MongoDB Atlas tutorial to create an account in Atlas (the developer data platform) and a cluster with which we can store and retrieve data. It is also advised that you spin up a dedicated Atlas cluster of size M10 or higher for this tutorial.
Now, let us see how we can set up MongoDB Atlas to provide relevant information to augment our RAG framework.
**Init Mongo client**
We can connect to the MongoDB Atlas cluster using the connection string as detailed in the tutorial link above. To initialize the connection string, run the below code block in your Jupyter notebook:
```python
from pymongo import MongoClient
try:
    MONGO_CONN = os.environ["MONGO_CONN"]
except KeyError:
    MONGO_CONN = getpass.getpass("Please enter your MongoDB connection String (hit enter): ")
```
When prompted, you can enter your MongoDB Atlas connection string.
**Download and load data to MongoDB Atlas**
In the steps below, we demonstrate how to download the products dataset from the provided URL and add the documents to the respective collection in MongoDB Atlas. We will also be embedding the raw product texts as vectors before adding them to MongoDB. You can do this by running the following lines of code in your Jupyter notebook:
```python
import wget
filename = './amazon-products.jsonl'
url = "https://github.com/ashwin-gangadhar-mdb/mbd-watson-rag/raw/main/amazon-products.jsonl"
if not os.path.isfile(filename):
    wget.download(url, out=filename)
# Load the documents using Langchain Document Loader
from langchain.document_loaders import JSONLoader
loader = JSONLoader(file_path=filename, jq_schema=".text",text_content=False,json_lines=True)
docs = loader.load()
# Initialize embeddings for transforming raw documents to vectors
from langchain.embeddings import HuggingFaceEmbeddings
from tqdm import tqdm as notebook_tqdm
embeddings = HuggingFaceEmbeddings()
# Initialize MongoDB client along with Langchain connector module
from langchain.vectorstores import MongoDBAtlasVectorSearch
client = MongoClient(MONGO_CONN)
vcol = client["amazon"]["products"]
vectorstore = MongoDBAtlasVectorSearch(vcol, embeddings)
# Load documents to the collection in MongoDB
vectorstore.add_documents(docs)
```
You will be able to see the documents have been created in `amazon` database under the collection `products`.
![MongoDB Atlas Products Collection][4]
Now that all the product information has been added to the respective collection, we can go ahead and create a vector index by following the steps given in the Atlas Search index tutorial. You can create the search index using both the Atlas UI as well as programmatically. Let us look at the steps if we are doing this using the Atlas UI.
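If you would rather paste a JSON definition into the Atlas Search JSON editor, it would look roughly like the sketch below. This assumes the LangChain connector defaults used above (an index named `default` on the `embedding` field) and the 768-dimension output of the all-mpnet-base-v2 model; depending on your Atlas version, you may instead create the newer dedicated Vector Search index type with equivalent settings.
```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "embedding": {
        "type": "knnVector",
        "dimensions": 768,
        "similarity": "cosine"
      }
    }
  }
}
```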
**Sample query to vector search**
We can test the vector similarity search by running the sample query with the LangChain MongoDB Atlas Vector Search connector. Run the following code in your Jupyter notebook:
```python
texts_sim = vectorstore.similarity_search(query, k=3)
print("Number of relevant texts: " + str(len(texts_sim)))
print("First 100 characters of relevant texts.")
for i in range(len(texts_sim)):
    print("Text " + str(i) + ": " + str(texts_sim[i].page_content[0:100]))
```
![Sample Vector Search Query Output][7]
In the above example code, we are able to use our sample text query to retrieve three relevant products. Further in the tutorial, let’s see how we can combine the capabilities of LLMs and vector search to build a RAG framework. For further information on the various operations you can perform with the `MongoDBAtlasVectorSearch` module in LangChain, you can visit the Atlas Vector Search documentation.
### RAG chain
In the code snippets below, we demonstrate how to initialize and query the RAG chain. We also introduce methods to improve the output from RAG so you can customize your output to cater to specific needs, such as the reason behind the product recommendation, language translation, summarization, etc.
So, you can set up the RAG chain and execute to get the response for our sample query by running the following lines of code in your Jupyter notebook:
```python
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(template="""
Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
##Question:{question} \n\
##Top 3 recommendations from Products and reason:\n""", input_variables=["context","question"])
chain_type_kwargs = {"prompt": prompt}
retriever = vectorstore.as_retriever(search_type="mmr", search_kwargs={'k': 6, 'lambda_mult': 0.25})
qa = RetrievalQA.from_chain_type(llm=granite_llm_ibm, chain_type="stuff",
retriever=retriever,
chain_type_kwargs=chain_type_kwargs)
res = qa.run(query)
print(f"{'-'*50}")
print("Query: " + query)
print(f"Response:\n{res}\n{'-'*50}\n")
```
The output will look like this:
![Sample RAG Chain Output][8]
You can see from the example output where the RAG is able to recommend products based on the query as well as provide a reasoning or explanation as to how this product suggestion is relevant to the query, thereby enhancing the user experience.
## Conclusion
In this tutorial, we demonstrated how to use watsonx LLMs along with Atlas Vector Search to build a RAG framework. We also demonstrated how to efficiently use the RAG framework to customize your application needs, such as the reasoning for product suggestions. By following the steps in the article, we were also able to bring the power of machine learning models to a private knowledge base that is stored in the Atlas Developer Data Platform.
In summary, RAG is a powerful NLG technique that can generate product recommendations as an extension to semantic search using vector search capabilities provided by MongoDB Atlas. RAG can also improve the UX of product recommendations by providing more personalized, diverse, informative, and engaging descriptions.
## Next steps
Explore more details on how you can [build generative AI applications using various assisted technologies and MongoDB Atlas Vector Search.
To learn more about Atlas Vector Search, visit the product page or the documentation for creating a vector search index or running vector search queries.
To learn more about watsonx, visit the IBM watsonx page.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltabec5b11a292b3d6/6553a958c787a446c12ab071/image1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte019110c36fc59f5/6553a916c787a4d4282ab069/image3.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltabec5b11a292b3d6/6553a958c787a446c12ab071/image1.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta69be6d193654a53/6553a9ad4d2859f3c8afae47/image5.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt70003655ac1919b7/6553a9d99f2b9963f6bc99de/image6.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcbdae931b43cc17a/6553a9f788cbda51858566f6/image2.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0b43cc03bf7bb27f/6553aa124452cc3ed9f9523d/image7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf6c21ef667b8470b/6553aa339f2b993db7bc99e3/image4.png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "Learn how to build a RAG framework using MongoDB Atlas Vector Search and IBM watsonx LLMs.",
"contentType": "Tutorial"
} | How to Seamlessly Use MongoDB Atlas and IBM watsonx.ai LLMs in Your GenAI Applications | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/six-principles-building-robust-flexible-shared-data-applications | created | # The Six Principles for Building Robust Yet Flexible Shared Data Applications
I've spent my seven years employed at MongoDB Inc. thinking about how organisations can better build fluid data-intensive applications. Over the years, in conversations with clients, I've tried to convey my opinions of how this can be achieved, but in hindsight, I've only had limited success, due to my inability to articulate the "why" and the "how" properly. In fact, the more I reflect, the more I realise it's a theme I've been jostling with for most of my IT career. For example, back in 2008, when SOAP was still commonplace for building web services, I touched on a similar theme in my blog post Web Service Messaging Nirvana. Now, after quite some time, I feel like I've finally been able to locate the signals in the noise, and capture these into something cohesive and positively actionable by others...
So, I've now brought together a set of techniques I've identified to effectively deliver resilient yet evolvable data-driven applications, in a recorded online 45-minute talk, which you can view below.
>The Six Principles For Resilient Evolvability by Paul Done.
>
>:youtube[]{vid=ms-2kgZbdGU}
You can also scan through the slides I used for the talk here.
I've also shared, on Github, a sample Rust application I built that highlights some of the patterns described.
In my talk, you will hear about the potential friction that can occur with multiple applications on different release trains, due to overlapping dependencies on a shared data set. Without forethought, the impact of making shared data model changes to meet new requirements for one application can result in needing to modify every other application too, dramatically reducing business agility and flexibility. You might be asking yourself, "If this shared data is held in a modern real-time operational database like MongoDB, why isn't MongoDB's flexible data model sufficient to allow applications and services to easily evolve?" My talk will convey why this is a naive assumption made by some, and why the adoption of specific best practices, in your application tier, is also required to mitigate this.
In the talk, I identify the resulting best practices as a set of six key principles, which I refer to as "resilient evolvability." Below is a summary of the six principles:
1. Support optional fields. Field absence conveys meaning.
2. For Finds, only ask for fields that are your concern, to support variability and to reduce change dependency (see the short sketch after this list).
3. For Updates, always use in-place operators, changing targeted fields only. Replacing whole documents blows away changes made by other applications.
4. For the rare data model Mutative Changes, adopt "Interim Duplication" to reduce delaying high priority business requirements.
5. Facilitate entity variance, because real-world entities do vary, especially when a business evolves and diversifies.
6. Only use Document Mappers if they are NOT "all or nothing," and only if they play nicely with the other five principles.
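To make principles 2 and 3 concrete, here is a tiny sketch, shown with PyMongo purely for brevity (the companion sample project mentioned above is written in Rust, and the collection and field names here are illustrative):
```python
# Principle 2: for finds, project only the fields this application cares about.
order = db.orders.find_one({"_id": order_id}, {"status": 1, "total": 1})

# Principle 3: for updates, change targeted fields in place with operators like $set,
# so fields owned by other applications are left untouched.
db.orders.update_one({"_id": order_id}, {"$set": {"status": "SHIPPED"}})
```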
Additionally, in the talk, I capture my perspective on the three different distributed application/data architectural combinations I often see, which I call "The Data Access Triangle."
In essence, my talk is primarily focussed on how to achieve agility and flexibility when Shared Data is being used by many applications or services, but some of the principles will still apply when using Isolated Data or Duplicated Data for each application or service.
## Wrap-Up
From experience, by adopting the six principles, I firmly believe:
- Your software will enable varying structured data which embraces, rather than inhibits, real-world requirements.
- Your software won't break when additive data model changes occur, to rapidly meet new business requirements.
- You will have a process to deal with mutative data model changes, which reduces delays in delivering new business requirements.
This talk and its advice is the culmination of many years trying to solve and address the problems in this space. I hope you will find my guidance to be a useful contribution to your work and a set of principles everyone can build on in the future.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to build robust yet flexible shared data applications which don't break when data model changes occur, to rapidly meet new business requirements.",
"contentType": "Article"
} | The Six Principles for Building Robust Yet Flexible Shared Data Applications | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/mongodb-classmaps-optimal-performance | created | # How to Set Up MongoDB Class Maps for C# for Optimal Query Performance and Storage Size
> Starting out with MongoDB and C#? These tips will help you get your class maps right from the beginning to support your desired schema.
When starting my first projects with MongoDB and C# several years ago, what captivated me the most was how easy it was to store plain old CLR objects (POCOs) in a collection without having to create a static relational structure first and maintaining it painfully over the course of development.
Though MongoDB and C# have their own set of data types and naming conventions, the MongoDB C# Driver connects the two in a very seamless manner. At the center of this, class maps are used to describe the details of the mapping.
This post shows how to fine-tune the mapping in key areas and offers solutions to common scenarios.
## Automatic mapping
Even if you don't define a class map explicitly, the driver will create one as soon as the class is used for a collection. In this case, the properties of the POCO are mapped to elements in the BSON document based on the name. The driver also tries to match the property type to the BSON type of the element in MongoDB.
Though automatic mapping of a class will make sure that POCOs can be stored in a collection easily, tweaking the mapping is rewarded by better memory efficiency and enhanced query performance. Also, if you are working with existing data, customizing the mapping allows POCOs to follow C# and .NET naming conventions without changing the schema of the data in the collection.
## Declarative vs. imperative mapping
Adjusting the class map can be as easy as adding attributes to the declaration of a POCO (declarative mapping). These attributes are used by the driver when the class map is auto-mapped. This happens when the class is first used to access data in a collection:
```csharp
public class BlogPost
{
// ...
    [BsonElement("title")]
public string Title { get; set; } = string.Empty;
// ...
}
```
The above sample shows how the `BsonElement` attribute is used to adjust the name of the `Title` property in a document in MongoDB:
```BSON
{
// ...
"title": "Blog post title",
// ...
}
```
However, there are scenarios when declarative mapping is not applicable: If you cannot change the POCOs because they are defined in a third-party libary or if you want to separate your POCOs from MongoDB-related code parts, there also is the option to define the class maps imperatively by calling methods in code:
```csharp
BsonClassMap.RegisterClassMap<BlogPost>(cm =>
{
cm.AutoMap();
cm.MapMember(x => x.Title).SetElementName("title");
});
```
The code above first performs the auto-mapping and then includes the `Title` property in the mapping as an element named `title` in BSON, thus overriding the auto-mapping for the specific property.
One thing to keep in mind is that the class map needs to be registered before the driver starts the automatic mapping process for a class. It is a good idea to include it in the bootstrapping process of the application.
This post will use declarative mapping for better readability but all of the adjustments can also be made using imperative mapping, as well. You can find an imperative class map that contains all the samples at the end of the post.
## Adjusting property names
Whether you are working with existing data or want to name properties differently in BSON for other reasons, you can use the `BsonElement("specificElementName")` attribute introduced above. This is especially handy if you only want to change the name of a limited set of properties.
If you want to change the naming scheme in a widespread fashion, you can use a convention that is applied when auto-mapping the classes. The driver offers a number of conventions out-of-the-box (see the namespace MongoDB.Bson.Serialization.Conventions) and offers the flexibility to create custom ones if those are not sufficient.
An example is to name the POCO properties according to C# naming guidelines in Pascal case in C#, but name the elements in camel case in BSON by adding the CamelCaseElementNameConvention:
```csharp
var pack = new ConventionPack();
pack.Add(new CamelCaseElementNameConvention());
ConventionRegistry.Register(
"Camel Case Convention",
pack,
t => true);
```
Please note the predicate in the last parameter. This can be used to fine-tune whether the convention is applied to a type or not. In our sample, it is applied to all classes.
The above code needs to be run before auto-mapping takes place. You can still apply a `BsonElement` attribute here and there if you want to overwrite some of the names.
## Using ObjectIds as identifiers
MongoDB uses ObjectIds as identifiers for documents by default for the “_id” field. This is a data type that is unique to a very high probability and needs 12 bytes of memory. If you are working with existing data, you will encounter ObjectIds for sure. Also, when setting up new documents, ObjectIds are the preferred choice for identifiers. In comparison to GUIDs (UUIDs), they require less storage space and are ordered so that identifiers that are created later receive higher values.
In C#, properties can use `ObjectId` as their type. However, using `string` as the property type in C# simplifies the handling of the identifiers and increases interoperability with other frameworks that are not specific to MongoDB (e.g. OData).
In contrast, MongoDB should serialize the identifiers with the specific BSON type ObjectId to reduce storage size. In addition, performing a binary comparison on ObjectIds is much safer than comparing strings as you do not have to take letter casing, etc. into account.
```csharp
public class BlogPost
{
    [BsonRepresentation(BsonType.ObjectId)]
public string Id { get; set; } = ObjectId.GenerateNewId().ToString();
// ...
[BsonRepresentation(BsonType.ObjectId)]
    public ICollection<string> TopComments { get; set; } = new List<string>();
}
```
By applying the `BsonRepresentation` attribute, the `Id` property is serialized as an `ObjectId` in BSON. Also, the array of identifiers in `TopComments` also uses ObjectIds as their data type for the array elements:
```BSON
{
"_id" : ObjectId("6569b12c6240d94108a10d20"),
// ...
"TopComments" : [
ObjectId("6569b12c6240d94108a10d21"),
ObjectId("6569b12c6240d94108a10d22")
]
}
```
## Serializing GUIDs in a consistent way
While `ObjectId` is the default type of identifier for MongoDB, GUIDs or UUIDs are a data type that is used for identifying objects in a variety of programming languages. In order to store and query them efficiently, using a binary format instead of strings is also preferred.
In the past, GUIDs/UUIDs have been stored as BSON type binary of subtype 3; drivers for different programming environments serialized the value differently. Hence, reading GUIDs with the C# driver that had been serialized with a Java driver did not yield the same value. To fix this, the new binary subtype 4 was introduced by MongoDB. GUIDs/UUIDs are then serialized in the same way across drivers and languages.
To provide the flexibility to both handle existing values and new values on a property level, the MongoDB C# Driver introduced a new way of handling GUIDs. This is referred to as `GuidRepresentationMode.V3`. For backward compatibility, when using Version 2.x of the MongoDB C# Driver, the GuidRepresentationMode is V2 by default (resulting in binary subtype 3). This is set to change with MongoDB C# Driver version 3. It is a good idea to opt into using V3 now and specify the subtype that should be used for GUIDs on a property level. For new GUIDs, subtype 4 should be used.
This can be achieved by running the following code before creating the client:
```csharp
BsonDefaults.GuidRepresentationMode
= GuidRepresentationMode.V3;
```
Keep in mind that this setting requires the representation of the GUID to be specified on a property level. Otherwise, a `BsonSerializationException` will be thrown informing you that "GuidSerializer cannot serialize a Guid when GuidRepresentation is Unspecified." To fix this, add a `BsonGuidRepresentation` attribute to the property:
```csharp
[BsonGuidRepresentation(GuidRepresentation.Standard)]
public Guid MyGuid { get; set; } = Guid.NewGuid();
```
There are various settings available for `GuidRepresentation`. For new GUIDs, `Standard` is the preferred value, while the other values (e.g., `CSharpLegacy`) support the serialization of existing values in binary subtype 3.
For a detailed overview, see the [documentation of the driver.
## Processing extra elements
Maybe you are working with existing data and only some part of the elements is relevant to your use case. Or you have older documents in your collection that contain elements that are not relevant anymore. Whatever the reason, you want to keep the POCO minimal so that it only comprises the relevant properties.
By default, the MongoDB C# Driver is strict and raises a `FormatException` if it encounters elements in a BSON document that cannot be mapped to a property on the POCO:
```"Element '[...]' does not match any field or property of class [...]."```
Those elements are called "extra elements."
One way to handle this is to simply ignore extra elements by applying the `BsonIgnoreExtraElements` attribute to the POCO:
```csharp
[BsonIgnoreExtraElements]
public class BlogPost
{
// ...
}
```
If you want to use this behavior on a large scale, you can again register a convention:
```csharp
var pack = new ConventionPack();
pack.Add(new IgnoreExtraElementsConvention(true));
ConventionRegistry.Register(
"Ignore Extra Elements Convention",
pack,
t => true);
```
Be aware that if you use _replace_ when storing the document, extra properties that C# does not know about will be lost.
On the other hand, MongoDB's flexible schema is built for handling documents with different elements. If you are interested in the extra properties or you want to safeguard for a replace, you can add a dictionary to your POCO and mark it with a `BsonExtraElements` attribute. The dictionary is filled with the content of the properties upon deserialization:
```csharp
public class BlogPost
{
// ...
[BsonExtraElements()]
    public IDictionary<string, object> ExtraElements { get; set; } = new Dictionary<string, object>();
}
```
Even when replacing a document that contains an extra-elements-dictionary, the key-value pairs of the dictionary are serialized as elements so that their content is not lost (or even updated if the value in the dictionary has been changed).
## Serializing calculated properties
Pre-calculation is key for great query performance and is a common pattern when working with MongoDB. In POCOs, this is supported by adding read-only properties, e.g.:
```csharp
public class BlogPost
{
// ...
public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
public DateTime? UpdatedAt { get; set; }
public DateTime LastChangeAt => UpdatedAt ?? CreatedAt;
}
```
By default, the driver excludes read-only properties from serialization. This can be fixed easily by applying a `BsonElement` attribute to the property — you don't need to change the name:
```csharp
public class BlogPost
{
// ...
public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
public DateTime? UpdatedAt { get; set; }
[BsonElement()]
public DateTime LastChangeAt => UpdatedAt ?? CreatedAt;
}
```
After this change, the read-only property is included in the document and it can be used in indexes and queries:
```BSON
{
// ...
"CreatedAt" : ISODate("2023-12-01T12:16:34.441Z"),
"UpdatedAt" : null,
"LastChangeAt" : ISODate("2023-12-01T12:16:34.441Z")
}
```
## Custom serializers
Common scenarios are very well supported by the MongoDB C# Driver. If this is not enough, you can create a [custom serializer that supports your specific scenario.
Custom serializers can be used to handle documents with different data for the same element. For instance, if some documents store the year as an integer and others as a string, a custom serializer can analyze the BSON type during deserialization and read the value accordingly.
However, this is a last resort that you will rarely need to use as the existing options offered by the MongoDB C# Driver cover the vast majority of use cases.
## Conclusion
As you have seen, the MongoDB C# Driver offers a lot of options to tweak the mapping between POCOs and BSON documents. POCOs can follow C# conventions while at the same time building upon a schema that offers good query performance and reduced storage consumption.
If you have questions or comments, join us in the MongoDB Developer Community!
### Appendix: sample for imperative class map
```csharp
BsonClassMap.RegisterClassMap<BlogPost>(cm =>
{
// Perform auto-mapping to include properties
// without specific mappings
cm.AutoMap();
// Serialize string as ObjectId
cm.MapIdMember(x => x.Id)
.SetSerializer(new StringSerializer(BsonType.ObjectId));
// Serialize ICollection as array of ObjectIds
cm.MapMember(x => x.TopComments)
.SetSerializer(
            new IEnumerableDeserializingAsCollectionSerializer<ICollection<string>, string, List<string>>(
new StringSerializer(BsonType.ObjectId)));
// Change member name
cm.MapMember(x => x.Title).SetElementName("title");
// Serialize Guid as binary subtype 4
cm.MapMember(x => x.MyGuid).SetSerializer(new GuidSerializer(GuidRepresentation.Standard));
// Store extra members in dictionary
cm.MapExtraElementsMember(x => x.ExtraElements);
// Include read-only property
cm.MapMember(x => x.LastChangeAt);
});
```
| md | {
"tags": [
"C#",
".NET"
],
"pageDescription": "",
"contentType": "Article"
} | How to Set Up MongoDB Class Maps for C# for Optimal Query Performance and Storage Size | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/pymongoarrow-bigframes-using-python | created | # Orchestrating MongoDB & BigQuery for ML Excellence with PyMongoArrow and BigQuery Pandas Libraries
In today's data-driven world, the ability to analyze and efficiently move data across different platforms is crucial. MongoDB Atlas and Google BigQuery are two powerful platforms frequently used for managing and analyzing data. While they excel in their respective domains, connecting and transferring data between them seamlessly can pose challenges. However, with the right tools and techniques, this integration becomes not only possible but also streamlined.
One effective way to establish a smooth pipeline between MongoDB Atlas and BigQuery is by leveraging PyMongoArrow and pandas-gbq, two powerful Python libraries that facilitate data transfer and manipulation. PyMongoArrow acts as a bridge between MongoDB and Arrow, a columnar in-memory analytics layer, enabling efficient data conversion. On the other hand, pandas-gbq is a Python client library for Google BigQuery, allowing easy interaction with BigQuery datasets.
This tutorial also introduces Google BigQuery DataFrames (bigframes), which can be used on data read from both Google BigQuery and MongoDB Atlas platforms without physically moving the data between these platforms. This will simplify the effort required by data engineers to move the data and offers a faster way for data scientists to build machine learning (ML) models.
Let's discuss each of the implementation advantages with examples.
### ETL data from MongoDB to BigQuery
Let’s consider a sample shipwreck dataset available on MongoDB Atlas for this use case.
Use the commands below to install the required libraries on the notebook environment of your choice. For easy and scalable setup, use BigQuery Jupyter notebooks or managed VertexAI workbench notebooks.
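Based on the libraries imported in the snippets that follow, a typical notebook install cell looks something like this (see the GitHub repository mentioned below for the full setup steps):
```python
!pip install pymongo pymongoarrow pandas-gbq certifi
```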
Refer to the MongoDB Atlas documentation for setting up your cluster, network access, and authentication. Load a sample dataset to your Atlas cluster. Get the Atlas connection string and replace the URI string below with your connection string. The below script is also available in the GitHub repository with steps to set up.
```python
#Read data from MongoDB
import certifi
import pprint
import pymongo
import pymongoarrow
from pymongo import MongoClient
client = MongoClient("<your Atlas connection string>", tlsCAFile=certifi.where())
#Initialize database and collection
db = client.get_database("sample_geospatial")
col = db.get_collection("shipwrecks")
for doc in col.find({}):
    pprint.pprint(doc)
from pymongoarrow.monkey import patch_all
patch_all()
#Create Dataframe for data read from MongoDB
import pandas as pd
df = col.find_pandas_all({})
```
Transform the data to the required format — e.g., transform and remove the unsupported data formats, like the MongoDB object ID, or convert the MongoDB object to JSON before writing it to BigQuery. Please refer to the documentation to learn more about data types supported by pandas-gbq and PyMongoArrow.
```python
#Transform the schema for required format.
#e.g. the object id is not supported in dataframes can be removed or converted to string.
del(df["_id"])
```
Once you have retrieved data from MongoDB Atlas and converted it into a suitable format using PyMongoArrow, you can proceed to transfer it to BigQuery using either the pandas-gbq or google-cloud-bigquery. In this article, we are using pandas-gbq. Refer to the [documentation for more details on the differences between pandas-gbq and google-cloud-bigquery libraries. Ensure you have a dataset in BigQuery to which you want to load the MongoDB data. You can create a new dataset or use an existing one.
```python
#Write the transformed data to BigQuery.
import pandas_gbq
pandas_gbq.to_gbq(df[0:100], "gcp-project-name.bigquery-dataset-name.bigquery-table-name", project_id="gcp-project-name")
```
As you embark on building your pipeline, optimizing the data transfer process between MongoDB Atlas and BigQuery is essential for performance. A few points to consider:
1. Batch DataFrames into chunks, especially when dealing with large datasets, to prevent memory issues (see the sketch after this list).
1. Handle schema mapping and data type conversions properly to ensure compatibility between the source and destination databases.
1. With the right tools, like Google Colab or Vertex AI Workbench, this pipeline can become a cornerstone of your data ecosystem, facilitating smooth and reliable data movement between MongoDB Atlas and Google BigQuery.
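As an illustration of the first point, pandas-gbq can write the DataFrame in smaller batches through its `chunksize` parameter; the values below are placeholders to adapt to your own project, table, and memory budget:
```python
pandas_gbq.to_gbq(
    df,
    "gcp-project-name.bigquery-dataset-name.bigquery-table-name",
    project_id="gcp-project-name",
    chunksize=10000,      # rows per batch; tune to your row size and memory
    if_exists="append"    # append each batch to the same table
)
```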
### Introduction to Google BigQuery DataFrames (bigframes)
Google bigframes is a Python API that provides a pandas-compatible DataFrame and machine learning capabilities powered by the BigQuery engine. It provides a familiar pandas interface for data manipulation and analysis. Once the data from MongoDB is written into BigQuery, the BigQuery DataFrames can unlock the user-friendly solution for analyzing petabytes of data with ease. The pandas DataFrame can be read directly into BigQuery DataFrames using the Python bigframes.pandas library. Install the bigframes library to use BigQuery DataFrames.
```
!pip install bigframes
```
Before reading the pandas DataFrames into BigQuery DataFrames, rename the columns as per Google's schema guidelines. (Please note that at the time of publication, the feature may not be GA.)
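As a minimal sketch, assuming you only need to replace characters BigQuery does not allow in column names (adapt this to your own naming scheme and Google's current rules):
```python
import re

# Keep only letters, digits, and underscores in column names
df.columns = [re.sub(r"[^0-9a-zA-Z_]", "_", str(col)) for col in df.columns]
```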
```python
import bigframes
import bigframes.pandas as bpd

bigframes.options.bigquery.project = "GCP project ID"
# df = the pandas DataFrame created from the MongoDB data in the earlier steps
bdf = bpd.read_pandas(df)
```
For more information on using Google Cloud Bigquery DataFrames, visit the Google Cloud documentation.
## Conclusion
Creating a robust pipeline between MongoDB Atlas and BigQuery using PyMongoArrow and pandas-gbq opens up a world of possibilities for efficient data movement and analysis. This integration allows for the seamless transfer of data, enabling organizations to leverage the strengths of both platforms for comprehensive data analytics and decision-making.
### Further reading
- Learn more about MongoDB PyMongoArrow libraries and how to use them.
- Read more about Google BigQuery DataFrames for pandas and ML.
- Load DataFrames to BigQuery with Google pandas-gbq.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2557413e5cba18f3/65c3cbd20872227d14497236/image1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7459376aeef23b01/65c3cbd2245ed9a8b190fd38/image2.png | md | {
"tags": [
"MongoDB",
"Python",
"Pandas",
"Google Cloud",
"AI"
],
"pageDescription": "Orchestrating MongoDB & BigQuery for ML Excellence with PyMongoArrow and BigQuery Pandas Librarie",
"contentType": "Tutorial"
} | Orchestrating MongoDB & BigQuery for ML Excellence with PyMongoArrow and BigQuery Pandas Libraries | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/kotlin/mastering-kotlin-creating-api-ktor-mongodb-atlas | created | # Mastering Kotlin: Creating an API With Ktor and MongoDB Atlas
Kotlin's simplicity, Java interoperability, and Ktor's user-friendly framework combined with MongoDB Atlas' flexible cloud database provide a robust stack for modern software development.
Together, we'll demonstrate and set up the Ktor project, implement CRUD operations, define API route endpoints, and run the application. By the end, you'll have a solid understanding of Kotlin's capabilities in API development and the tools needed to succeed in modern software development.
## Demonstration
Once your account is created, access the **Overview** menu, then **Connect**, and select **Kotlin**. After that, our connection **string** will be available as shown in the image below:
Any questions? Come chat with us in the MongoDB Developer Community.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt02db98d69407f577/65ce9329971dbb3e733ff0fa/1.gif
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt45cf0f5548981055/65ce9d77719d5654e2e86701/1.gif
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt79f17d2600ccd262/65ce9350800623c03507858f/2.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltddd64b3284f120d4/65ce93669be818cb46d5a628/3.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt69bc7f53eff9cace/65ce937f76c8be10aa75c034/4.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt317cc4b60864461a/65ce939b6b67f967a3ee2723/5.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt69681844c8a7b8b0/65ce93bb8e125b05b739af2c/6.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt49fa907ced0329aa/65ce93df915aea23533354e0/7.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta3c3a68abaced0e0/65ce93f0fc5dbd56d22d5e4c/8.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blteeebb531c794f095/65ce9400c3164b51b2b471dd/9.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4f1f4abbb7515c18/65ce940f5c321d8136be1c12/10.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb837240b887cdde2/65ce9424f09ec82e0619a7ab/11.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5c37f97d439b319a/65ce9437bccfe25e8ce992ab/12.png
[14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4791dadc3ebecb2e/65ce9448c3164b1ffeb471e9/13.png | md | {
"tags": [
"Kotlin"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Mastering Kotlin: Creating an API With Ktor and MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/atlas-search-with-csharp | created | # MongoDB Atlas Search with .NET Blazor for Full-Text Search
Imagine being presented with a website with a large amount of data and not being able to search for what you want. Instead, you’re forced to sift through piles of results with no end in sight.
That is, of course, the last thing you want for yourself or your users. So in this tutorial, we’ll see how you can easily implement search with autocomplete in your .NET Blazor application using MongoDB Atlas Search.
Atlas Search is the easiest and fastest way to implement relevant searches into your MongoDB Atlas-backed applications, making it simpler for developers to focus on implementing other things.
## Prerequisites
In order to follow along with this tutorial, you will need a few things in place before you start:
- An IDE or text editor that can support C# and Blazor for the most seamless development experience, such as Visual Studio, Visual Studio Code with the C# DevKit Extension installed, and JetBrains Rider.
- An Atlas M0 cluster, our free forever tier, perfect for development.
- The sample dataset loaded into the cluster.
- Your cluster connection string for use in your application settings later on.
- A fork of the GitHub repo that we will be adding search to.
Once you have forked and then cloned the repo and have it locally, you will need to add your connection string into ```appsettings.Development.json``` and ```appsettings.json``` in the placeholder section in order to connect to your cluster when running the project.
> If you don’t want to follow along, the repo has a branch called “full-text-search” which has the final result implemented.
## Creating Atlas Search indexes
Before we can start adding Atlas Search to our application, we need to create search indexes inside Atlas. These indexes enable full-text search capabilities on our database. We want to specify what fields we wish to index.
Atlas Search does support dynamic indexes, which apply to all fields and adapt to any document shape changes. But for this tutorial, we are going to add a search index for a specific field, “title.”
1. Inside Atlas, click “Browse Collections” to open the data explorer to view your newly loaded sample data.
2. Select the “Atlas Search” tab at the top.
3. Click the green “Create Search Index” button to load the index creation wizard.
4. Select Visual Editor and then click “Next.”
5. Give your index a name. What you choose is up to you.
6. For “Database and Collection,” select “sample_mflix” to expand the database and select the “movies” collection. Then, click “Next.”
7. In the final review section, click the “Refine Your Index” button below the “Index Configurations” table as we want to make some changes.
8. Click “+ Add Field Mapping” about halfway down the page.
9. In “Field Name,” search for “title.”
10. For “Data Type,” select “Autocomplete.” This is because we want to have autocomplete available in our application so users can see results as they start typing.
11. Click the “Add” button in the bottom right corner.
12. Click “Save” and then “Create Search Index.”
After a few minutes, the search index will be set up and the application will be ready to be “searchified.”
If you prefer to use the JSON editor to simply copy and paste, you can use the following:
```json
{
"mappings": {
"dynamic": true,
"fields": {
"title": {
"type": "autocomplete"
}
}
}
}
```
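For comparison, a fully dynamic index that indexes every supported field with default options (rather than mapping just the title field for autocomplete) needs nothing more than the following definition:
```json
{
  "mappings": {
    "dynamic": true
  }
}
```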
## Implementing backend functionality
Now that the database is set up to support Atlas Search with our new indexes, it's time to update the code in the application to support search. The code has an interface and service for talking to Atlas using the MongoDB C# driver, which can be found in the ```Services``` folder.
### Adding a new method to IMongoDBService
First up is adding a new method for searching to the interface.
Open ```IMongoDBService.cs``` and add the following code:
```csharp
public IEnumerable<Movie> MovieSearchByText(string textToSearch);
```
We return an IEnumerable of movie documents because multiple documents might match the search terms.
### Implementing the method in MongoDBService
Next up is adding the implementation to the service.
1. Open ```MongoDBService.cs``` and paste in the following code:
```csharp
public IEnumerable<Movie> MovieSearchByText(string textToSearch)
{
// define fuzzy options
SearchFuzzyOptions fuzzyOptions = new SearchFuzzyOptions()
{
MaxEdits = 1,
PrefixLength = 1,
MaxExpansions = 256
};
// define and run pipeline
    var movies = _movies.Aggregate().Search(Builders<Movie>.Search.Autocomplete(movie => movie.Title,
        textToSearch, fuzzy: fuzzyOptions), indexName: "title").Project<Movie>(Builders<Movie>.Projection
        .Exclude(movie => movie.Id)).ToList();
return movies;
}
```
Replace the value for ```indexName``` with the name you gave your search index.
Fuzzy search allows for approximate matching to a search term which can be helpful with things like typos or spelling mistakes. So we set up some fuzzy search options here, such as how close to the right term the characters need to be and how many characters at the start that must exactly match.
Atlas Search is carried out using the $search aggregation stage, so we call ```.Aggregate()``` on the movies collection and then call the ```Search``` method.
We then pass a builder to the search stage to search against the title using our passed-in search text and the fuzzy options from earlier.
The ```.Project()``` stage is optional but we’re going to include it because we don’t use the _id field in our application. So for performance reasons, it is always good to exclude any fields you know you won’t need to be returned.
You will also need to make sure the following using statements are present at the top of the class for the code to run later:
```csharp
using SeeSharpMovies.Models;
using MongoDB.Driver;
using MongoDB.Driver.Search;
```
Just like that, the back end is ready to accept a search term, search the collection for any matching documents, and return the result.
## Implementing frontend functionality
Now that the back end is ready to accept our searches, it is time to implement it on the front end so users can search. This will be split into two parts: the code in the front end for talking to the back end, and the search bar in HTML for typing into.
### Adding code to handle search
This application uses razor pages which support having code in the front end. If you look inside ```Home.razor``` in the ```Components/Pages``` folder, you will see there is already some code there for requesting all movies and pagination.
1. Inside the ```@code``` block, underneath the existing variables, add the following code:
```csharp
string searchTerm;
Timer debounceTimer;
int debounceInterval = 200;
```
As expected, there is a string variable to hold the search term, but the other two values might not seem obvious. In development, where you are accepting input and then calling some kind of service, you want to avoid calling it too often. So you can implement something called *debounce* which handles that. You will see that implemented later but it uses a timer and an interval — in this case, 200 milliseconds.
2. Add the following code after the existing methods:
```csharp
private void SearchMovies()
{
if (string.IsNullOrWhiteSpace(searchTerm))
{
movies = MongoDBService.GetAllMovies();
}
else
{
movies = MongoDBService.MovieSearchByText(searchTerm);
}
}
void DebounceSearch(object state)
{
if (string.IsNullOrWhiteSpace(searchTerm))
{
SearchMovies();
}
else
{
InvokeAsync(() =>
{
SearchMovies();
StateHasChanged();
});
}
}
void OnSearchInput(ChangeEventArgs e)
{
searchTerm = e.Value.ToString();
debounceTimer?.Dispose();
debounceTimer = new Timer(DebounceSearch, null, debounceInterval, Timeout.Infinite);
}
```
SearchMovies: This method handles an empty search box as trying to search on nothing will cause it to error. So if there is nothing in the search box, it fetches all movies again. Otherwise, it calls the backend method we implemented previously to search by that term.
DebounceSearch: This calls `SearchMovies` and, if there is a search term available, it also tells the component that the state has changed.
OnSearchInput: This will be called later by our search box but this is an event handler that says that when there is a change event, set the search term to the value of the box, reset the debounce timer, and start it again from the timer interval, passing in the ```DebounceSearch``` method as a callback function.
Now we have the code to smoothly handle receiving input and calling the back end, it is time to add the search box to our UI.
### Adding a search bar
Adding the search bar is really simple. We are going to add it to the header component already present on the home page.
After the link tag with the text “See Sharp Movies,” add a search box that calls the ```OnSearchInput``` handler on every keystroke. The exact markup is up to you; a minimal version looks something like this:
```html
<!-- Minimal search box: @oninput fires OnSearchInput on every keystroke -->
<input type="search" placeholder="Search movies..." @oninput="OnSearchInput" />
```
## Testing the search functionality
Now that we have the backend code available and the front end has the search box and a way to send the search term to the back end, it's time to run the application and see it in action.
Run the application, enter a search term in the box, and test the result.
## Summary
Excellent! You now have a Blazor application with search functionality added and a good starting point for using full-text search in your applications going forward.
If you want to learn more about Atlas Search, including more features than just autocomplete, you can take an amazing Atlas Search workshop created by my colleague or view the [docs](https://www.mongodb.com/docs/manual/text-search/). If you have questions or feedback, join us in the [Community Forums](https://www.mongodb.com/community/forums/).
| md | {
"tags": [
"C#",
".NET"
],
"pageDescription": "In this tutorial, learn how to add Atlas Search functionality with autocomplete and fuzzy search to a .NET Blazor application.",
"contentType": "Tutorial"
} | MongoDB Atlas Search with .NET Blazor for Full-Text Search | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/introducing-atlas-stream-processing-support-mongodb-vs-code-extension | created | # Introducing Atlas Stream Processing Support Within the MongoDB for VS Code Extension
Across industries, teams are building applications that need access to low-latency data to deliver compelling experiences and gain valuable business insights. Stream processing is a fundamental building block powering these applications. Stream processing lets developers discover and act on streaming data (data in motion), and combine that data when necessary with data at rest (data stored in a database). MongoDB is a natural fit for streaming data with its capabilities around storing and querying unstructured data and an effective query API. MongoDB Atlas Stream Processing is a service within MongoDB Atlas that provides native stream processing capabilities. In this article, you will learn how to use the MongoDB for VS Code extension to create and manage stream processors in MongoDB Atlas.
## Installation
MongoDB support for VS Code is provided by the MongoDB for VS Code extension. To install the MongoDB for VS Code extension, launch VS Code, open the Extensions view, and search for MongoDB to filter the results. Select the MongoDB for VS Code extension.
Alternatively, visit the online documentation for installation instructions.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt666c23a3d692a93f/65ca49389778069713c044c0/1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf7ffa77814bb0f50/65ca494dfaacae5fb31fbf4e/2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltefc76078f205a62a/65ca495d8a7a51c5d10a6474/3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt79292cfdc1f0850b/65ca496fdccfc6374daaf101/4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta3bccb1ba48cdb6a/65ca49810ad0380459881a98/5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfa4cf4e38b4e9feb/65ca499676283276edc5e599/6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt795a00f7ec1d1d12/65ca49ab08fffd1cdc721948/7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0c505751ace8d8ad/65ca49bc08fffd774372194c/8.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt214f210305140aa8/65ca49cc8a7a5127a00a6478/9.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf09aabfaf76e1907/65ca49e7f48bc2130d50eb36/10.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5b7e455dfdb6982f/65ca49f80167d0582c8f8e88/11.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta8ed4dee7ddc3359/65ca4a0aedad33ddf7fae3ab/12.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt856ff7e440e2c786/65ca4a18862c423b4dfb5c91/13.png | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to use the MongoDB for VS Code extension to create and manage stream processors in MongoDB Atlas.",
"contentType": "Tutorial"
} | Introducing Atlas Stream Processing Support Within the MongoDB for VS Code Extension | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/mongodb-dataflow-templates-udf-enhancement | created | # UDF Announcement for MongoDB to BigQuery Dataflow Templates
Many enterprise customers using MongoDB Atlas as their core operational database also use BigQuery for their batch and AI/ML-based analytics, making seamless transfer of data between these systems pivotal. Since the announcement of the Dataflow templates (in October of 2022) for moving data between MongoDB and BigQuery, we have seen a lot of interest from customers, as the templates make an append-only, one-to-one migration of data effortless. Though the three Dataflow templates provided cater to most of the common use cases, there was also a demand to be able to do transformations as part of these templates.
We are excited to announce the addition of the ability to write your own user-defined functions (UDFs) in these Dataflow pipelines! This new feature allows you to use JavaScript UDFs to transform your data as it moves into BigQuery. With UDFs, you can define custom logic and business rules that are applied to your data as it is processed by Dataflow. This allows you to perform complex transformations, such as modifying fields, concatenating fields, deleting fields, and converting embedded documents into separate documents. These UDFs take unprocessed documents as input parameters and return the processed documents as output.
To use UDFs with BigQuery Dataflow, simply write your JavaScript function and store it in the Google cloud storage bucket. Use the Dataflow templates’ optional parameter to read these UDFs while running the templates. The function will be executed on the data as it is being processed, allowing you to apply custom logic and transformations to your data during the transfer.
## How to set it up
Let’s have a quick look at how to set up a sample UDF to process (transform a field, flatten an embedded document, and delete a field) from an input document before writing the processed data to BigQuery.
### Set up MongoDB
1. MongoDB Atlas setup through registration.
2. MongoDB Atlas setup through Google Cloud Marketplace. (MongoDB Atlas is available pay-as-you-go in the Google Cloud Marketplace.)
3. Create your MongoDB cluster.
4. Click on **Browse collections** and click on **+Create Database**.
5: Name your database **Sample_Company** and collection **Sample_Employee**.
6: Click on **INSERT DOCUMENT**.
Copy and paste the below document and click on **Insert**.
```
{
"Name":"Venkatesh",
"Address":{"Phone":{"$numberLong":"123455"},"City":"Honnavar"},
"Department":"Solutions Consulting",
"Direct_reporting": "PS"
}
```
7: To have authenticated access on the MongoDB Sandbox cluster from Google console, we need to create database users.
Click on the **Database Access** from the left pane on the Atlas Dashboard.
Choose to **Add New User** using the green button on the left. Enter the username `appUser` and password `appUser123`. We will use built-in roles; click **Add Default Privileges** and in the **Default Privileges** section, add the roles readWriteAnyDatabase. Then press the green **Add User** button to create the user.
8: Whitelist the IPs.
For the purpose of this demo, we will allow access from any IP, i.e., 0.0.0.0/0. However, this is not recommended for a production setup, where the recommendation is to use VPC peering and private IPs.
### Set up Google Cloud
1. Create a cloud storage bucket.
2. On your local machine, create a Javascript file **transform.js** and add below sample code.
```
function transform(inputDoc) {
    var outputDoc = inputDoc;
    // Flatten the embedded Address.City field to a top-level City field
    outputDoc["City"] = inputDoc["Address"]["City"];
    // Remove the original Address sub-document
    delete outputDoc["Address"];
    return outputDoc;
}
```
This function takes each document read from MongoDB using the Apache Beam MongoDB IO connector, flattens the embedded Address.City field to a top-level City field, deletes the Address field, and returns the updated document.
3: Upload the JavaScript file to the Google Cloud storage bucket.
4: Create a BigQuery Dataset in your project in the region close to your physical location.
5: Create a Dataflow pipeline.
a. Click on the **Create Job from the template** button at the top.
b. Job Name: **mongodb-udf**.
c. Region: Same as your BigQuery dataset region.
d. MongoDB connection URI: Copy the connection URI for connecting applications from MongoDB Atlas.
e. MongoDB Database: **Sample_Company**.
f. MongoDB Collection: **Sample_Employee**.
g. BigQuery Destination Table: Copy the destination table link from the BigQuery dataset details page, in the format bigquery-project:**sample_dataset.sample_company**.
h. User Option: **FLATTEN**.
i. Click on **show optional parameters**.
j. Cloud storage location of your JavaScript UDF: Browse to the UDF file you uploaded to the bucket. This is the new feature that allows running the UDF to apply the transformations before inserting into BigQuery.
k. Name of your JavaScript function: **transform**.
6: Click on **RUN JOB** to start running the pipeline. Once the pipeline finishes running, your graph should show **Succeeded** on each stage as shown below.
7: After completion of the job, you will be able to see the transformed document inserted into BigQuery.
## Conclusion
In this blog, we introduced UDFs for the MongoDB to BigQuery Dataflow templates and their ability to transform the documents read from MongoDB using custom user-defined JavaScript functions stored in Google Cloud storage buckets. This blog also includes a simple tutorial on how to set up MongoDB Atlas, Google Cloud, and the UDFs.
### Further reading
* A data pipeline for MongoDB Atlas and BigQuery using Dataflow.
* A data pipeline for MongoDB Atlas and BigQuery using the Confluent connector.
* Run analytics using BigQuery using BigQuery ML.
* Set up your first MongoDB cluster using Google Marketplace.
| md | {
"tags": [
"Atlas",
"JavaScript",
"AI"
],
"pageDescription": "Learn how to transform the MongoDB Documents using user-defined JavaScript functions in Dataflow templates.",
"contentType": "Tutorial"
} | UDF Announcement for MongoDB to BigQuery Dataflow Templates | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/improving-storage-read-performance-free-flat-vs-structured-schemas | created | # Improving Storage and Read Performance for Free: Flat vs Structured Schemas
When developers or administrators who had previously only been "followers of the word of relational data modeling" start to use MongoDB, it is common to see documents with flat schemas. This behavior happens because relational data modeling makes you think about data and schemas in a flat, two-dimensional structure called tables.
In MongoDB, data is stored as BSON documents, almost a binary representation of JSON documents, with slight differences. Because of this, we can create schemas with more dimensions/levels. More details about BSON implementation can be found in its specification. You can also learn more about its differences from JSON.
MongoDB documents are composed of one or more key/value pairs, where the value of a field can be any of the BSON data types, including other documents, arrays, or arrays of documents.
Using documents, arrays, or arrays of documents as values for fields enables the creation of a structured schema, where one field can represent a group of related information. This structured schema is an alternative to a flat schema.
Let's see an example of how to write the same `user` document using the two schemas:
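For instance, a `user` document that stores contact details could be modeled either way (the field names below are purely illustrative):
```javascript
// Flat schema: every value is a top-level field
{
  "name": "Jane Doe",
  "address_street": "123 Main St",
  "address_city": "Springfield",
  "address_country": "USA"
}

// Structured schema: related fields are grouped in an embedded document
{
  "name": "Jane Doe",
  "address": {
    "street": "123 Main St",
    "city": "Springfield",
    "country": "USA"
  }
}
```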
![Flat vs. structured schema comparison][1]

To compare the read performance of the two schemas, the following test was performed:
- Documents with 10, 25, 50, and 100 fields were utilized for the flat schema.
- Documents with 2x5, 5x5, 10x5, and 20x5 fields were used for the structured schema, where 2x5 means two fields of type document with five fields for each document.
- Each collection had 10,000 documents generated using faker/npm.
- To force the MongoDB engine to loop through all documents and all fields inside each document, all queries were made searching for a field and value that wasn't present in the documents.
- Each query was executed 100 times in a row for each document size and schema.
- No concurrent operation was executed during each test.
Now, to the test results:
| **Documents** | **Flat** | **Structured** | **Difference** | **Improvement** |
| ------------- | -------- | -------------- | -------------- | --------------- |
| 10 / 2x5 | 487 ms | 376 ms | 111 ms | 29.5% |
| 25 / 5x5 | 624 ms | 434 ms | 190 ms | 43.8% |
| 50 / 10x5 | 915 ms | 617 ms | 298 ms | 48.3% |
| 100 / 20x5 | 1384 ms | 891 ms | 493 ms | 55.4% |
As our theory predicted, traversing a structured document is faster than traversing a flat one. The gains presented in this test shouldn't be generalized to all comparisons between structured and flat schemas; the improvement in traversal will depend on how the nested fields and documents are organized.
This article showed how to better use your MongoDB deployment by changing the schema of your document for the same data/information. Another option to extract more performance from your MongoDB deployment is to apply the common schema patterns of MongoDB. In this case, you will analyze which data you should put in your document/schema. The article Building with Patterns has the most common patterns and will significantly help.
The code used to get the above results is available in the GitHub repository.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0d2f0e3700c6e2ac/65b3f5ce655e30caf6eb9dba/schema-comparison.jpg
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte533958fd8753347/65b3f611655e30a264eb9dc4/image1.png | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to optimize the size of your documents within MongoDB by changing how you structure your schema.",
"contentType": "Article"
} | Improving Storage and Read Performance for Free: Flat vs Structured Schemas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/streamlining-cloud-native-development-gitpod-atlas | created | # Streamlining Cloud-Native Development with Gitpod and MongoDB Atlas
Developers are increasingly shifting from the traditional development model of writing code and testing the entire application stack locally to remote development environments that are more cloud-native. This allows them to have environments that are configurable as-code, are easily reproducible for any team member, and are quick enough to spin up and tear down that each pull request can have an associated ephemeral environment for code reviews.
As new platforms and services that developers use on a daily basis are more regularly provided as cloud-first or cloud-only offerings, it makes sense to leverage all the advantages of the cloud for the entire development lifecycle and have the development environment more effectively mirror the production environment.
In this blog, we’ll look at how Gitpod, with its Cloud Development Environment (CDE), is a perfect companion for MongoDB Atlas when it comes to a cloud-native development experience. We are so excited about the potential of this combined development experience that we invested in Gitpod’s most recent funding round.
As an example, let’s look at a simple Node.js application that exposes an API to retrieve quotes from popular authors. You can find the source code on Github. You should be able to try out the end-to-end setup yourself by going to Gitpod. The project is configured to use a free cluster in Atlas and, assuming you don’t have one already running in your Atlas account, everything should work out of the box.
The code for the application is straightforward and is mostly contained in app.js, but the most interesting part is how the Gitpod development environment is set up: With just a couple of configuration files added to the GitHub repository, **a developer who works on this project for the first time can have everything up and running, including the MongoDB cluster needed for development seeded with test data, in about 30 seconds!**
Let’s take a look at how that is possible.
We’ll start with the Dockerfile. Gitpod provides an out-of-the-box Docker image for the development environment that contains utilities and support for the most common programming languages. In our case, we prefer to start with a minimal image and add only what we need to it: the Atlas CLI (and the MongoDB Shell that comes with it) to manage resources in Atlas and Node.js.
```dockerfile
FROM gitpod/workspace-base:2022-09-07-02-19-02
# Install MongoDB Tooling
RUN sudo apt-get install gnupg
RUN wget -qO - https://pgp.mongodb.com/server-5.0.asc | sudo apt-key add -
RUN echo "deb arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
RUN sudo apt-get update
RUN sudo apt-get install -y mongodb-atlas
# Install Node 18
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
RUN sudo apt-get install -y nodejs
# Copy Atlas script
COPY mongodb-utils.sh /home/gitpod/.mongodb-utils.sh
RUN echo "source ~/.mongodb-utils.sh" >> .bash_aliases
```
To make things a little easier and cleaner, we’ll also add to the container a `mongodb-utils.sh` file and load it into `.bash_aliases`. It’s a bash script that contains convenience functions that wrap some of the Atlas CLI commands to make them easier to use within the Gitpod environment.
The second half of the configuration is contained in .gitpod.yml. This file may seem a little verbose, but what it does is relatively simple. Let’s take a closer look at these configuration details in the following sections of this article.
## Ephemeral cluster for development
Our Quotes API application uses MongoDB to store data: All the quotes with their metadata are in a MongoDB collection. Atlas is the best way to run MongoDB so we will be using that. Plus, because we are using Atlas, we can also take advantage of Atlas Search to offer full-text search capabilities to our API users.
Since we want our development environment to have characteristics that are compatible with what we’ll have in production, we will use Atlas for our development needs as well. In particular, we want to make sure that every time a developer starts a Gitpod environment, a corresponding ephemeral cluster is created in Atlas and seeded with test data.
With some simple configuration, Gitpod takes care of all of this in a fully automated way. The `atlas_up` script creates a cluster with the same name as the Gitpod workspace. This way, it’s easy to see what clusters are being used for development.
```bash
if [ ! -n "${MONGODB_ATLAS_PROJECT_ID+1}" ]; then
echo "\$MONGODB_ATLAS_PROJECT_ID is not set. Lets try to login."
if ! atlas auth whoami &> /dev/null ; then
atlas auth login --noBrowser
fi
fi
MONGODB_CONNECTION_STRING=$(atlas_up)
```
The script above is a little sophisticated as it takes care of opening the browser and logging you in with your Atlas account if it’s the first time you’ve set up Gitpod with this project. Once you are set up the first time, you can choose to generate API credentials and skip the login step in the future. The instructions on how to do that are in the README file included in the repository.
## Development cluster seeded with sample data
When developing an application, it’s convenient to have test data readily available. In our example, the repository contains a zipped dataset in JSON format. During the initialization of the workspace, once the cluster is deployed, we connect to it with the MongoDB Shell (mongosh) and run a script that loads the unzipped dataset into the cluster.
```bash
unzip data/quotes.zip -d data
mongosh $MONGODB_CONNECTION_STRING data/_load.js
```
## Creating an Atlas Search index
As part of our Quotes API, we provide an endpoint to search for quotes based on their content or their author. With Atlas Search and the MongoDB Query API, it is extremely easy to configure full-text search for a given collection, and we’ll use that in our application.
As we want the environment to be ready to code, as part of the initialization, we also create a search index. For convenience, we included the `data/_create-search-index.sh` script, which takes care of that by calling the `atlas cluster search index create` command and passing the right parameters to it.
## Cleaning things up
To make the cluster truly ephemeral and start with a clean state every time we start a new workspace, we want to make sure we terminate it once it is no longer needed.
For this example, we’ve used a free cluster, which is perfect for most development use cases. However, if you need better performance, you can always configure your environment to use a paid cluster (see the `--tier` option of the Atlas CLI). Should you choose to do so, it is even more important to terminate the cluster when it is no longer needed so you can avoid unnecessary costs.
To do that, we wait for the Gitpod environment to be terminated. That is what this section of the configuration file does:
```yml
tasks:
- name: Cleanup Atlas Cluster
command: |
atlas_cleanup_when_done
```
The `atlas_cleanup_when_done` script waits for the SIGTERM sent to the Gitpod container and, once it receives it, it sends a command to the Atlas CLI to terminate the cluster.
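The exact script lives in the repository, but a simplified sketch of the idea looks something like this (the cluster naming and the use of the workspace ID are assumptions made for illustration):

```bash
# Sketch only: block until the workspace receives SIGTERM, then delete the ephemeral cluster.
atlas_cleanup_when_done() {
  trap 'atlas clusters delete "$GITPOD_WORKSPACE_ID" --force' SIGTERM
  # Keep the task alive so the trap can fire when Gitpod stops the workspace
  sleep infinity &
  wait $!
}
```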
## End-to-end developer experience
During development, it is often useful to look at the data stored in MongoDB. As Gitpod integrates very well with VS Code, we can configure it so the MongoDB for VS Code extension is included in the setup.
This way, whoever starts the environment has the option of connecting to the Atlas cluster directly from within VS Code to explore their data, and test their queries. MongoDB for VS Code is also a useful tool to insert and edit data into your test database: With its Playground functionality, it is really easy to execute any CRUD operation, including scripting the insertion of fake test data.
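For example, a short playground script (the database, collection, and field names here are assumptions based on the quotes example) can seed a few extra documents and immediately query them back:

```javascript
// MongoDB Playground sketch: insert and read back some test quotes
use('quotes_dev');
db.quotes.insertMany([
  { author: 'Jane Doe', text: 'An example quote for testing.' },
  { author: 'John Doe', text: 'Another example quote for testing.' }
]);
db.quotes.findOne({ author: 'Jane Doe' });
```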
As this is a JavaScript application, we also include the Standard VS Code extension for linting and code formatting.
```yml
vscode:
extensions:
- mongodb.mongodb-vscode
- standard.vscode-standard
```
## Conclusion
MongoDB Atlas is the ideal data platform across the entire development lifecycle. With Atlas, developers get a platform that is 100% compatible with production, including services like Atlas Search that runs next to the core database. And as developers shift towards Cloud Development Environments like Gitpod, they can get an even more sophisticated experience developing in the cloud with Atlas and always be ready to code. Check out the source code provided in this article and give MongoDB Atlas a try with Gitpod.
Questions? Comments? Head to the MongoDB Developer Community to join the conversation. | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "More developers are moving from local development to working in cloud-native, remote development environments. Together, MongoDB and Gitpod make a perfect pair for developers looking for this type of seamless cloud development experience.",
"contentType": "Tutorial"
} | Streamlining Cloud-Native Development with Gitpod and MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/leveraging-mongodb-atlas-vector-search-langchain | created | # Leveraging MongoDB Atlas Vector Search with LangChain
## Introduction to Vector Search in MongoDB Atlas
Vector search engines — also termed as vector databases, semantic search, or cosine search — locate the closest entries to a specified vectorized query. While the conventional search methods hinge on keyword references, lexical match, and the rate of word appearances, vector search engines measure similarity by the distance in the embedding dimension. Finding related data becomes searching for the nearest neighbors of your query.
Vector embeddings act as the numeric representation of data and its accompanying context, preserved in high-dimensional (dense) vectors. There are various models, both proprietary (like those from OpenAI and Hugging Face) and open-source ones (like FastText), designed to produce these embeddings. These models can be trained on millions of samples to deliver results that are both more pertinent and precise. In certain situations, the numeric data you've gathered or designed to showcase essential characteristics of your documents might serve as embeddings. The crucial part is to have an efficient search mechanism, like MongoDB Atlas.
To create the index, open your cluster in the Atlas UI, choose _Search_, and then _Create Search Index_. Please also visit the official MongoDB documentation to learn more.
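If you use the JSON editor for this, a definition along the following lines should work; it assumes the default `embedding` field that LangChain writes to and the 1,536 dimensions produced by OpenAI's default embedding model:
```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      }
    }
  }
}
```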
You will also need an OpenAI API key, which you can generate in your user settings on the OpenAI platform.
To install LangChain, you'll first need to update pip for Python or npm for JavaScript, then use the respective install command. Here are the steps:
For Python version, use:
```
pip3 install pip --upgrade
pip3 install langchain
```
We will also need other Python modules, such as ``pymongo`` for communication with MongoDB Atlas, ``openai`` for communication with the OpenAI API, and ``pypdf`` and ``tiktoken`` for other functionalities.
```
pip3 install pymongo openai pypdf tiktoken
```
### Start using Atlas Vector Search
In our exercise, we utilize a publicly accessible PDF document titled "MongoDB Atlas Best Practices" as a data source for constructing a text-searchable vector space. The implemented Python script employs several modules to process, vectorize, and index the document's content into a MongoDB Atlas collection.
In order to implement it, let's begin by setting up and exporting the environmental variables. We need the Atlas connection string and the OpenAI API key.
```
export OPENAI_API_KEY="xxxxxxxxxxx"
export ATLAS_CONNECTION_STRING="mongodb+srv://user:passwd@vectorsearch.abc.mongodb.net/?retryWrites=true"
```
Next, we can execute the code provided below. This script retrieves a PDF from a specified URL, segments the text, and indexes it in MongoDB Atlas for text search, leveraging LangChain's embedding and vector search features. The full code is accessible on GitHub.
```
import os
from pymongo import MongoClient
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch
# Define the URL of the PDF MongoDB Atlas Best Practices document
pdf_url = "https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4HkJP"
# Retrieve environment variables for sensitive information
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
if not OPENAI_API_KEY:
raise ValueError("The OPENAI_API_KEY environment variable is not set.")
ATLAS_CONNECTION_STRING = os.getenv('ATLAS_CONNECTION_STRING')
if not ATLAS_CONNECTION_STRING:
raise ValueError("The ATLAS_CONNECTION_STRING environment variable is not set.")
# Connect to MongoDB Atlas cluster using the connection string
cluster = MongoClient(ATLAS_CONNECTION_STRING)
# Define the MongoDB database and collection names
DB_NAME = "langchain"
COLLECTION_NAME = "vectorSearch"
# Connect to the specific collection in the database
MONGODB_COLLECTION = cluster[DB_NAME][COLLECTION_NAME]
# Initialize the PDF loader with the defined URL
loader = PyPDFLoader(pdf_url)
data = loader.load()
# Initialize the text splitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
# Split the document into manageable segments
docs = text_splitter.split_documents(data)
# Initialize MongoDB Atlas vector search with the document segments
vector_search = MongoDBAtlasVectorSearch.from_documents(
documents=docs,
embedding=OpenAIEmbeddings(),
collection=MONGODB_COLLECTION,
index_name="default" # Use a predefined index name
)
# At this point, 'docs' are split and indexed in MongoDB Atlas, enabling text search capabilities.
```
Upon completion of the script, the PDF has been segmented and its vector representations are now stored within the ``langchain.vectorSearch`` namespace in MongoDB Atlas.
![embedding results][5]
### Execute similarities searching query in Atlas Vector Search
"`MongoDB Atlas auditing`" serves as our search statement for initiating similarity searches. By utilizing the `OpenAIEmbeddings` class, we'll generate vector embeddings for this phrase. Following that, a similarity search will be executed to find and extract the three most semantically related documents from our MongoDB Atlas collection that align with our search intent.
In the first step, we need to create a ``MongoDBAtlasVectorSearch`` object:
```
def create_vector_search():
"""
Creates a MongoDBAtlasVectorSearch object using the connection string, database, and collection names, along with the OpenAI embeddings and index configuration.
:return: MongoDBAtlasVectorSearch object
"""
vector_search = MongoDBAtlasVectorSearch.from_connection_string(
ATLAS_CONNECTION_STRING,
f"{DB_NAME}.{COLLECTION_NAME}",
OpenAIEmbeddings(),
index_name="default"
)
return vector_search
```
Subsequently, we can perform a similarity search.
```
def perform_similarity_search(query, top_k=3):
"""
This function performs a similarity search within a MongoDB Atlas collection. It leverages the capabilities of the MongoDB Atlas Search, which under the hood, may use the `$vectorSearch` operator, to find and return the top `k` documents that match the provided query semantically.
:param query: The search query string.
:param top_k: Number of top matches to return.
:return: A list of the top `k` matching documents with their similarity scores.
"""
# Get the MongoDBAtlasVectorSearch object
vector_search = create_vector_search()
# Execute the similarity search with the given query
results = vector_search.similarity_search_with_score(
query=query,
k=top_k,
)
return results
# Example of calling the function directly
search_results = perform_similarity_search("MongoDB Atlas auditing")
```
The function returns the most semantically relevant documents from a MongoDB Atlas collection that correspond to a specified search query. When executed, it will provide a list of documents that are most similar to the query "`MongoDB Atlas auditing`". Each entry in this list includes the document's content that matches the search, along with a similarity score reflecting how closely each document aligns with the intent of the query. The function returns the top `k` matches (three by default in this example), but you can request any number of top results. Please find the code on GitHub.
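For example, you could print the matches and their scores like this:
```python
# Each result is a (document, score) tuple returned by similarity_search_with_score
for document, score in search_results:
    print(f"Score: {score:.4f}")
    print(document.page_content[:200])  # first 200 characters of the matching chunk
    print("---")
```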
## Summary
MongoDB Atlas Vector Search enhances AI applications by facilitating the embedding of vector data into MongoDB documents. It simplifies the creation of search indices and the execution of KNN searches through the ``$vectorSearch`` MQL stage, utilizing the Hierarchical Navigable Small Worlds algorithm for efficient nearest neighbor searches. The collaboration with LangChain leverages this functionality, contributing to more streamlined and powerful semantic search capabilities. Harness the potential of MongoDB Atlas Vector Search and LangChain to meet your semantic search needs today!
In the next blog post, we will delve into LangChain Templates, a new feature set to enhance the capabilities of MongoDB Atlas Vector Search. Alongside this, we will examine the role of retrieval-augmented generation (RAG) in semantic search and AI development. Stay tuned for an in-depth exploration in our upcoming article!
Questions? Comments? We’d love to continue the conversation over in the Developer Community forum.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte7a2e75d0a8966e6/6553d385f1467608ae159f75/1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdc7192b71b0415f1/6553d74b88cbdaf6aa8571a7/2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfd79fc3b47ce4ad8/6553d77b38b52a4917584197/3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta6bbbb7c921bb08c/65a1b3ecd2ebff119d6f491d/atlas-search-create-search-index.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt627e7a7dd7b1a208/6553d7b28c5bd6f5f8c993cf/4.png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "Discover the integration of MongoDB Atlas Vector Search with LangChain, explored in Python in this insightful article. It highlights how advanced semantic search capabilities and high-dimensional embeddings revolutionize data retrieval. Understand the use of MongoDB Atlas' $vectorSearch operator and how Python enhances the functionality of LangChain in building AI-driven applications. This guide offers a comprehensive overview for harnessing these cutting-edge tools in data analysis and AI-driven search processes.",
"contentType": "Tutorial"
} | Leveraging MongoDB Atlas Vector Search with LangChain | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/java-spring-bulk-writes | created | # Implementing Bulk Writes using Spring Boot for MongoDB
## Introduction
The Spring Data Framework is used extensively in applications as it makes it easier to access different kinds of persistence stores. This article will show how to use Spring Data MongoDB to implement bulk insertions.
BulkOperations is an interface that contains a list of write operations to be applied to the database. They can be any combination of:

- `insertOne`
- `updateOne`
- `updateMany`
- `replaceOne`
- `deleteOne`
- `deleteMany`
A bulkOperation can be ordered or unordered. Ordered operations will be applied sequentially and if an error is detected, will return with an error code. Unordered operations will be applied in parallel and are thus potentially faster, but it is the responsibility of the application to check if there were errors during the operations. For more information please refer to the bulk write operations section of the MongoDB documentation.
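For example, with Spring Data's `MongoTemplate` (which we will configure below), the mode is chosen when the `BulkOperations` object is created:
```java
// Ordered: operations are applied sequentially and stop at the first error
BulkOperations orderedOps = mongoTemplate.bulkOps(BulkOperations.BulkMode.ORDERED, Products.class);

// Unordered: operations may be applied in parallel; errors are reported after all of them have been attempted
BulkOperations unorderedOps = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, Products.class);
```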
## Getting started
A POM file will specify the version of Spring Data that the application will use. Care must be taken to use a version of Spring Data that utilizes a compatible version of the MongoDB Java Driver. You can verify this compatibility in the MongoDB Java API documentation.
```
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
    <version>2.7.2</version>
</dependency>
```
## Application class
The top-level class is a `SpringBootApplication` that implements a `CommandLineRunner`, like so:
```
@SpringBootApplication
public class SpringDataBulkInsertApplication implements CommandLineRunner {
@Value("${documentCount}")
private int count;
private static final Logger LOG = LoggerFactory
.getLogger(SpringDataBulkInsertApplication.class);
@Autowired
private CustomProductsRepository repository;
public static void main(String[] args) {
SpringApplication.run(SpringDataBulkInsertApplication.class, args);
}
@Override
public void run(String... args) {
repository.bulkInsertProducts(count);
LOG.info("End run");
}
}
```
Now we need to write a few classes to implement our bulk insertion application.
## Configuration class
We will implement a class that holds the configuration to the MongoClient object that the Spring Data framework will utilize.
The `@Configuration` annotation will allow us to retrieve values to configure access to the MongoDB environment. For a good explanation of Java-based configuration, see JavaConfig in the Spring reference documentation.
```
@Configuration
public class MongoConfig {
@Value("${mongodb.uri}")
private String uri;
@Value("${mongodb.database}")
private String databaseName;
@Value("${truststore.path}")
private String trustStorePath;
@Value("${truststore.pwd}")
private String trustStorePwd;
@Value("${mongodb.atlas}")
private boolean atlas;
@Bean
public MongoClient mongo() {
ConnectionString connectionString = new ConnectionString(uri);
MongoClientSettings mongoClientSettings = MongoClientSettings.builder()
.applyConnectionString(connectionString)
.applyToSslSettings(builder -> {
if (!atlas) {
// Use SSLContext if a trustStore has been provided
if (!trustStorePath.isEmpty()) {
SSLFactory sslFactory = SSLFactory.builder()
.withTrustMaterial(Paths.get(trustStorePath), trustStorePwd.toCharArray())
.build();
SSLContext sslContext = sslFactory.getSslContext();
builder.context(sslContext);
builder.invalidHostNameAllowed(true);
}
}
builder.enabled(true);
})
.build();
return MongoClients.create(mongoClientSettings);
}
@Bean
public MongoTemplate mongoTemplate() throws Exception {
return new MongoTemplate(mongo(), databaseName);
}
}
```
In this implementation, we are using a flag, mongodb.atlas, to indicate that this application will connect to Atlas. If the flag is false, an SSL context may be created using a trustStore. This presents a certificate for the root certificate authority, in the form of a truststore file pointed to by truststore.path and protected by a password (`truststore.pwd`) at the moment of creation. If needed, the client can also offer a keystore file, but this is not implemented.
The parameter mongodb.uri should contain a valid MongoDB URI. The URI contains the hosts to which the client connects, the user credentials, etcetera.
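These values come from the Spring application.properties file (or an equivalent configuration source); the snippet below is an illustrative example with placeholder credentials:
```properties
mongodb.uri=mongodb+srv://user:password@cluster0.example.mongodb.net/?retryWrites=true&w=majority
mongodb.database=test
mongodb.atlas=true
truststore.path=
truststore.pwd=
documentCount=100000
```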
## The document class
The relationship between MongoDB collection and the documents that it contains is implemented via a class that is decorated by the @Document annotation. This class defines the fields of the documents and the annotation defines the name of the collection.
```
@Document("products")
public class Products {
private static final Logger LOG = LoggerFactory
.getLogger(Products.class);
@Id
private String id;
private String name;
private int qty;
private double price;
private Date available;
private Date unavailable;
private String skuId;
```
Setters and getters need to be defined for each field. The @Id annotation marks the field that is mapped to the document's _id, which is indexed by default. If this field is not specified, MongoDB will assign an ObjectId value, which will be unique.
## Repository classes
The repository is implemented with two classes, one an interface and the other the implementation of the interface. The repository classes flesh out the interactions of the application with the database. A method in the repository is responsible for the bulk insertion:
```
@Component
public class CustomProductsRepositoryImpl implements CustomProductsRepository {
private static final Logger LOG = LoggerFactory
.getLogger(CustomProductsRepository.class);
@Autowired
MongoTemplate mongoTemplate;
public int bulkInsertProducts(int count) {
LOG.info("Dropping collection...");
mongoTemplate.dropCollection(Products.class);
LOG.info("Dropped!");
Instant start = Instant.now();
mongoTemplate.setWriteConcern(WriteConcern.W1.withJournal(true));
Products [] productList = Products.RandomProducts(count);
BulkOperations bulkInsertion = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, Products.class);
        for (int i = 0; i < count; i++) {
            // queue each randomly generated product for insertion
            bulkInsertion.insert(productList[i]);
        }
        // apply all queued inserts in a single bulk write
        BulkWriteResult bulkWriteResult = bulkInsertion.execute();
        LOG.info("Bulk insert of {} documents took {} ms", count, Duration.between(start, Instant.now()).toMillis());
        return bulkWriteResult.getInsertedCount();
    }
}
``` | md | {
"tags": [
"Java",
"Spring"
],
"pageDescription": "Learn how to use Spring Data MongoDB to implement bulk insertions for your application",
"contentType": "Tutorial"
} | Implementing Bulk Writes using Spring Boot for MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/mongodb-atlas-with-terraform | created | # MongoDB Atlas with Terraform
In this tutorial, I will show you how to start using MongoDB Atlas with Terraform and create some simple resources. This first part is simpler and more introductory, but in the next article, I will explore more complex items and how to connect the creation of several resources into a single module. The tutorial is aimed at people who want to maintain their infrastructure as code (IaC) in a standardized and simple way. If you already use or want to use IaC on the MongoDB Atlas platform, this article is for you.
What are modules?
They are code containers for multiple resources that are used together. They serve several important purposes in building and managing infrastructure as code, such as:
1. Code reuse.
2. Organization.
3. Encapsulation.
4. Version management.
5. Ease of maintenance and scalability.
6. Sharing in the community.
Everything we do here is contained in the provider/resource documentation.
> Note: We will not use a backend file. However, for productive implementations, it is extremely important and safer to store the state file in a remote location such as an S3, GCS, Azurerm, etc…
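For reference, a remote backend is just a small block in your Terraform configuration; for example, an S3 backend (the bucket and key names below are placeholders) would look like this:

```tf
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "mongodb-atlas/terraform.tfstate"
    region = "us-east-1"
  }
}
```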
## Creating a project
In this first step, we will dive into the process of creating a project using Terraform. Terraform is a powerful infrastructure-as-code tool that allows you to manage and provision IT resources in an efficient and predictable way. By using it in conjunction with MongoDB Atlas, you can automate the creation and management of database resources in the cloud, ensuring a consistent and reliable infrastructure.
To get started, you'll need to install Terraform in your development environment. This step is crucial as it is the basis for running all the scripts and infrastructure definitions we will create. After installation, the next step is to configure Terraform to work with MongoDB Atlas. You will need an API key that has permission to create a project at this time.
To create an API key, you must:
1. Select **Access Manager** at the top of the page, and click **Organization Access**.
2. Click **Create API Key**.
![Organization Access Manager for your organization][1]
3. Enter a brief description of the API key and the necessary permission. In this case, I put it as Organization Owner. After that, click **Next**.
![Screen to create your API key][2]
4. Your API key will be displayed on the screen.
![Screen with information about your API key][3]
5. Add your IP to the API access list (optional): If your organization requires an IP access list for the Atlas Administration API, the requestor's IP must be added to the API key's access list. To validate whether this requirement is enabled or not, go to **Organization Settings -> Require IP Access List** for the Atlas Administration API. In my case, it is disabled, as it is just a demonstration, but if you are using this in an organization, I strongly advise you to enable it.
![Validate whether the IP Require Access List for APIs is enabled in Organization Settings][4]
After creating an API key, let's start working with Terraform. You can use the IDE of your choice; I will be using VS Code. Create the files within a folder. The files we will need at this point are:
- main.tf: In this file, we will define the main resource, `mongodbatlas_project`. Here, you will configure the project name and organization ID, as well as other specific settings, such as teams, limits, and alert settings.
- provider.tf: This file is where we define the provider we are using — in our case, `mongodbatlas`. Here, you will also include the access credentials, such as the API key.
- terraform.tfvars: This file contains the variables that will be used in our project — for example, the project name, team information, and limits, among others.
- variable.tf: Here, we define the variables mentioned in the terraform.tfvars file, specifying the type and, optionally, a default value.
- version.tf: This file is used to specify the version of Terraform and the providers we are using.
The main.tf file is the heart of our Terraform project. In it, you start with the data source declaration `mongodbatlas_roles_org_id` to obtain the `org_id`, which is essential for creating the project.
Next, you define the `mongodbatlas_project` resource with several settings. Here are some examples:
- `name` and `org_id` are basic settings for the project name and organization ID.
- Dynamic blocks are used to dynamically configure teams and limits, allowing flexibility and code reuse.
- Other settings, like `with_default_alerts_settings` and `is_data_explorer_enabled`, are options for customizing the behavior of your MongoDB Atlas project.
In the main.tf file, we will then add our project resource, called `mongodbatlas_project`.
```tf
data "mongodbatlas_roles_org_id" "org" {}
resource "mongodbatlas_project" "default" {
name = var.name
org_id = data.mongodbatlas_roles_org_id.org.org_id
dynamic "teams" {
for_each = var.teams
content {
team_id = teams.value.team_id
role_names = teams.value.role_names
}
}
dynamic "limits" {
for_each = var.limits
content {
name = limits.value.name
value = limits.value.value
}
}
with_default_alerts_settings = var.with_default_alerts_settings
is_collect_database_specifics_statistics_enabled = var.is_collect_database_specifics_statistics_enabled
is_data_explorer_enabled = var.is_data_explorer_enabled
is_extended_storage_sizes_enabled = var.is_extended_storage_sizes_enabled
is_performance_advisor_enabled = var.is_performance_advisor_enabled
is_realtime_performance_panel_enabled = var.is_realtime_performance_panel_enabled
is_schema_advisor_enabled = var.is_schema_advisor_enabled
}
```
In the provider file, we will define the provider we are using and the API key that will be used. As we are just testing, I will pass the API key as a variable that we will input into our code. However, when you are using it in production, you will not want to keep the API key in plain text in the code, so you can pass it through environment variables or even AWS Secrets Manager.
```tf
provider "mongodbatlas" {
public_key = var.atlas_public_key
private_key = var.atlas_private_key
}
```
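If you prefer not to write the keys into a .tfvars file, you can feed these variables through environment variables instead; Terraform automatically picks up any variable prefixed with `TF_VAR_`:

```bash
export TF_VAR_atlas_public_key="<YOUR_PUBLIC_KEY>"
export TF_VAR_atlas_private_key="<YOUR_PRIVATE_KEY>"
```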
In the variable.tf file, we will specify the variables that we expect the user to provide. As I mentioned earlier, the API key is an example.
```tf
variable "name" {
description = <= 0.12"
required_providers {
mongodbatlas = {
source = "mongodb/mongodbatlas"
version = "1.14.0"
}
}
}
```
- `required_version = ">= 0.12"`: This line specifies that your Terraform project requires, at a minimum, Terraform version 0.12. By using >=, you indicate that any version of Terraform from 0.12 onward is compatible with your project. This offers some flexibility by allowing team members and automation systems to use newer versions of Terraform as long as they are not older than 0.12.
- `required_providers`: This section lists the providers required for your Terraform project. In your case, you are specifying the mongodbatlas provider.
- `source = "mongodb/mongodbatlas"`: This defines the source of the mongodbatlas provider. Here, mongodb/mongodbatlas is the official identifier of the MongoDB Atlas provider in the Terraform Registry.
- `version = "1.14.0":` This line specifies the exact version of the mongodbatlas provider that your project will use, which is version 1.14.0. Unlike Terraform configuration, where we specify a minimum version, here you are defining a provider-specific version. This ensures that everyone using your code will work with the same version of the provider, avoiding discrepancies and issues related to version differences.
Finally, we have the variable file that will be included in our code, .tfvars.
```tf
name = "project-test"
atlas_public_key = "YOUR PUBLIC KEY"
atlas_private_key = "YOUR PRIVATE KEY"
```
We are specifying the value of the name variable, which is the name of the project and the public/private key of our provider. You may wonder, "Where are the other variables that we specified in the main.tf and variable.tf files?" The answer is: These variables were specified with a default value within the variable.tf file — for example, the limits value:
```tf
variable "limits" {
  description = "A list of limits to apply to the project"
  type = list(object({
    name  = string
    value = number
  }))
  default = []
}
``` | md | {
"tags": [
"Atlas",
"Terraform"
],
"pageDescription": "Learn how to get started with organising your MongoDB deployment with Terraform, using code to build and maintain your infrastructure.",
"contentType": "Tutorial"
} | MongoDB Atlas with Terraform | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/python-data-access-layer | created | # Building a Python Data Access Layer
This tutorial will show you how to use some reasonably advanced Python techniques to wrap BSON documents in a way that makes them feel much more like Python objects and allows different ways to access the data within. It's the first in a series demonstrating how to build a Python data access layer for MongoDB.
## Coding with Mark?
This tutorial is loosely based on the first episode of a new livestream I host, called "Coding with Mark." I'm streaming on Wednesdays at 2 p.m. GMT (that's 9 a.m. ET or 6 a.m. PT, if you're an early riser!). If that time doesn't work for you, you can always catch up by watching the recordings!
For the first few episodes, you can follow along as I attempt to build a different kind of Pythonic data access layer, a library to abstract underlying database modeling changes from a hypothetical application. One of the examples I'll use later on in this series is a microblogging platform, along the lines of Twitter/X or Bluesky. In order to deal with huge volumes of data, various modeling techniques are required, and my library will attempt to find ways to make these data modeling choices invisible to the application, making it easier to develop while remaining possible to change the underlying data model.
I'm using some pretty advanced programming and metaprogramming techniques to hide away some quite clever functionality. It's going to be a good series whether you're looking to improve either your Python or your MongoDB skills.
If that doesn't sound exciting enough, I'm lining up some awesome guests from the Python community, and in the future, we may branch away from Python and into other strange and wonderful worlds.
## Why a data access layer?
In any well-architected application of a reasonable size, you'll usually find that the codebase is split into at least three areas of concern:
1. A presentation layer is concerned with formatting data for consumption by a client. This may generate web pages to be viewed by a person in a browser, but increasingly, this may be an API endpoint, either driving an app that runs on a user's computer (or within their browser) or providing data to other services within a broader service-based architecture. This layer is also responsible for receiving data from a client and parsing it into data that can be used by the business logic layer.
2. A business logic layer sits behind the presentation layer and provides the "brains" of an application, making decisions on what actions to take based on user requests or data input into the application.
3. The data access layer, where I'm going to be focusing, provides a layer of abstraction over the database. Its responsibility is to request data from the database and provide them in a usable form to the business logic layer, but also to take requests from the business logic layer and to appropriately store data in the database.
Unlike an ORM, though, a data access layer for MongoDB needs to work with documents. An ORM is an Object-Relational Mapper library and handles mapping between relational data in a tabular database and objects in your application.
## Why not an ODM?
Good question! Many great ODMs have been developed for MongoDB. ODM is short for "Object Document Mapper" and describes a type of library that attempts to map between MongoDB documents and your application objects. Just within the Python ecosystem, there is MongoEngine, ODMantic, PyMODM, and more recently, Beanie and Bunnet. The last two are more or less the same, but Beanie is built on asyncio and Bunnet is synchronous. We're especially big fans of Beanie at MongoDB, and because it's built on Pydantic, it works especially well with FastAPI.
On the other hand, most ODMs are essentially solving the same problem — abstracting away MongoDB's powerful query language to make it easier to read and write, and modeling document schemas as objects so that data can be directly serialized and deserialized between the application and MongoDB.
Once your data model becomes relatively sophisticated, however, if you're implementing one or more patterns to improve the performance and scalability of your application, the way your data is stored is not necessarily the way you logically think about it within your application.
On top of that, if you're working with a very large dataset, then data migration may not be feasible, meaning that different subsets of your data will be stored in different ways! A good data access layer should be able to abstract over these differences so that your application doesn't need to be rewritten each time you evolve your schema for one reason or another.
Am I just building another ODM? Well, yes, probably. I'm just a little reluctant to use the term because I think it comes along with some of the preconceptions I've mentioned here. If it is an ODM, it's one which will have a focus on the “M.”
And partly, I just think it's a fun thing to build. It's an experiment. Let's see if it works!
## Introducing DocBridge
You can check out the current library in the project's GitHub repo. At the time of writing, the README contains what could be described as a manifesto:
- Managing large amounts of data in MongoDB while keeping a data schema flexible is challenging.
- This ODM is not an active record implementation, mapping documents in the database directly into similar objects in code.
- This ODM is designed to abstract underlying documents, mapping potentially multiple document schemata into a shared object representation.
- It should also simplify the evolution of documents in the database, automatically migrating individual documents' schemas either on-read or on-write.
- There should be "escape hatches" so that unforeseen mappings can be implemented, hiding away the implementation code behind hopefully reusable components.
## Starting a New Framework
I think that's enough waffle. Let's get started.
If you want to get a look at how this will all work once it all comes together, skip to the end, where I'll also show you how it can be used with PyMongo queries. For the moment, I'm going to dive right in and start implementing a class for wrapping BSON documents to make it easier to abstract away some of the details of the document structure. In later tutorials, I may start to modify the way queries are done, but at the moment, I just want to wrap individual documents.
I want to define classes that encapsulate data from the database, so let's call that class `Document`. At the moment, I just need it to store away an underlying "raw" document, which PyMongo (and Motor) both provide as dict implementations:
```python
class Document:
    def __init__(self, doc, *, strict=False):
        self._doc = doc
        self._strict = strict
```
I've defined two parameters that are stored away on the instance: `doc` and `strict`. The first will hold the underlying BSON document so that it can be accessed, and `strict` is a boolean flag I'll explain below. In this tutorial, I'm mostly ignoring details of using PyMongo or Motor to access MongoDB — I'm just working with BSON document data as a plain old dict.
When a Document instance wraps a MongoDB document, if `strict` is `False`, then it will allow any field in the document to automatically be looked up as if it was a normal Python attribute of the Document instance that wraps it. If `strict` is `True`, then it won't allow this dynamic lookup.
So, if I have a MongoDB document that contains { 'name': 'Jones' }, then wrapping it with a Document will behave like this:
```python
>>> relaxed_doc = Document({ 'name': 'Jones' })
>>> relaxed_doc.name
"Jones"
>>> strict_doc = Document({ 'name': 'Jones' }, strict=True)
>>> strict_doc.name
Traceback (most recent call last):
File "", line 1, in
File ".../docbridge/__init__.py", line 33, in __getattr__
raise AttributeError(
AttributeError: 'Document' object has no attribute 'name'
```
The class doesn't do this magic attribute lookup by itself, though! To get that behavior, I'll need to implement `__getattr__`. This is a "magic" or "dunder" method that is automatically called by Python when an attribute is requested that is not actually defined on the instance or the class (or any of the superclasses). As a fallback, Python will call `__getattr__` if your class implements it and provide the name of the attribute that's been requested.
```python
def __getattr__(self, attr):
    if not self._strict:
        return self._doc[attr]
    else:
        raise AttributeError(
            f"{self.__class__.__name__!r} object has no attribute {attr!r}"
        )
```
This implements the logic I've described above (although it differs slightly from the code in the repository because there were a couple of bugs in that!).
This is a neat way to make a dictionary look like an object and allows document fields to be looked up as if they were attributes. It does currently require those attribute names to be exactly the same as the underlying fields, though, and it only works at the top level of the document. In order to make the encapsulation more powerful, I need to be able to configure how data is looked up on a per-field basis. First, let's handle how to map an attribute to a different field name.
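Before moving on, here's a quick illustration of that top-level limitation, using the `Document` class as defined so far (a hypothetical interactive session):

```python
>>> doc = Document({"name": "Jones", "address": {"city": "Glasgow"}})
>>> doc.name
'Jones'
>>> doc.address  # The nested value comes back as a plain dict...
{'city': 'Glasgow'}
>>> doc.address.city  # ...so attribute-style lookup stops working one level down.
Traceback (most recent call last):
  ...
AttributeError: 'dict' object has no attribute 'city'
```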
## Let's abstract field names
The first abstraction I'd like to implement is the ability to have a different field name in the BSON document to the one that's exposed by the Document object. Let's say I have a document like this:
```javascript
{
"cocktailName": "Old Fashioned"
}
```
The field name uses camelCase instead of the more idiomatic snake_case (which would be "cocktail_name" instead of "cocktailName"). At this point, I could change the field name with a MongoDB query, but that's not very sensible (because it's not that important) and potentially controversial with other teams using the same database who may be more used to camelCase names. So let's add the ability to explicitly map from one attribute name to a different field name in the wrapped document.
I'm going to do this using metaprogramming, but in this case, it doesn't require me to write a custom metaclass! Let's assume that I'm going to subclass `Document` to provide a specific mapping for cocktail recipe documents.
```python
class Cocktail(Document):
    cocktail_name = Field(field_name="cocktailName")
```
This may look similar to some patterns you've seen used by other ODMs or with, say, a Django model. Under the hood, `Field` needs to implement the Descriptor Protocol so that we can intercept attribute lookup for `cocktail_name` on instances of the `Cocktail` class and return data contained in the underlying BSON document.
## The Descriptor Protocol
The name sounds highly technical, but all it really means is that I'm going to implement a couple of methods on `Field` so that Python can treat it differently in two different ways:
`__set_name__` is called by Python when the Field is attached to a class (in this case, the Cocktail class). It's called with, you guessed it, the name of the field — in this case, "cocktail_name."
`__get__` is called by Python whenever the attribute is looked up on a Cocktail instance. So in this case, if I had a Cocktail instance called `my_cocktail`, then accessing `my_cocktail.cocktail_name` will call `Field.__get__()` under the hood, passing in the instance the attribute was looked up on and the class that the field is attached to as arguments. This allows you to return whatever you think should be returned by this attribute access — which is the underlying BSON document's "cocktailName" value.
Here's my implementation of `Field`. I've simplified it from the implementation in GitHub, but this implements everything I've described above.
```python
class Field:
    def __init__(self, field_name=None):
        """
        Initialize a Field attribute, mapping to an underlying BSON field.
        field_name is the name of the underlying BSON field.
        If field_name is None (the default), use the attribute name for lookup in the doc.
        """
        self.field_name = field_name

    def __set_name__(self, owner, name):
        """
        Called by Python when this Field instance is attached to a class (the owner).
        """
        self.name = name  # this is the *attribute* name on the class.
        # If no field_name was provided, then default to using the attribute
        # name to look up the BSON field:
        if self.field_name is None:
            self.field_name = name

    def __get__(self, ob, cls):
        """
        Called by Python when this attribute is looked up on an instance of
        the class it's attached to.
        """
        try:
            # Look up the BSON field and return it:
            return ob._doc[self.field_name]
        except KeyError as ke:
            raise ValueError(
                f"Attribute {self.name!r} is mapped to missing document property {self.field_name!r}."
            ) from ke
```
With the code above, I've implemented a Field object, which can be attached to a Document class. It gives you the ability to allow field lookups on the underlying BSON document, with an optional mapping between the attribute name and the underlying field name.
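As a quick sanity check, here's what that mapping might look like in use. This is a hypothetical interactive session with the `Cocktail` class defined above:

```python
>>> cocktail = Cocktail({"cocktailName": "Old Fashioned"})
>>> cocktail.cocktail_name  # Looked up via Field, mapped to "cocktailName".
'Old Fashioned'
>>> Cocktail({}).cocktail_name  # A missing field raises a descriptive error.
Traceback (most recent call last):
  ...
ValueError: Attribute 'cocktail_name' is mapped to missing document property 'cocktailName'.
```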
## Let's abstract document versioning
A very common pattern in MongoDB is the schema versioning pattern, which is very important if you want to maintain the evolvability of your data. ("Evolvability" is a term coined by Martin Kleppmann in his book, Designing Data-Intensive Applications.)
The premise is that over time, your document schema will need to change, either for efficiency reasons or just because your requirements have changed. MongoDB allows you to store documents with different structures within a single collection so a changing schema doesn't require you to change all of your documents in one go — which can be infeasible with very large datasets anyway.
Instead, the schema versioning pattern suggests that when your schema changes, as you update individual documents to the new structure, you update a field that specifies the schema version of each document.
For example, I might start with a document representing a person, like this:
```javascript
{
"name": "Mark Smith",
"schema_version": 1,
}
```
But eventually, I might realize that I need to break up the user's name:
```javascript
{
"full_name": "Mark Smith"
"first_name": "Mark",
"last_name": "Smith",
"schema_version": 2,
}
```
In this example, when I load a document from this collection, I won't know in advance whether it's version 1 or 2, so when I request the name of the person, it may be stored in "name" or "full_name" depending on whether the particular document has been upgraded or not.
For this, I've designed a different kind of "Field" descriptor, called a "FallthroughField." This one will take a list of field names and will attempt to look them up in turn. In this way, I can avoid checking the "schema_version" field in the underlying document, but it will still work with both older and newer documents.
`FallthroughField` looks like this:
```python
from typing import Sequence


class FallthroughField:
    def __init__(self, field_names: Sequence[str]) -> None:
        self.field_names = field_names

    def __get__(self, ob, cls):
        for field_name in self.field_names:  # loop through the field names until one returns a value.
            try:
                return ob._doc[field_name]
            except KeyError:
                pass
        else:
            raise ValueError(
                f"Attribute {self.name!r} references the field names {', '.join([repr(fn) for fn in self.field_names])} which are not present."
            )

    def __set_name__(self, owner, name):
        self.name = name
```
Obviously, changing a field name is a relatively trivial schema change. I have big plans for how I can use descriptors to abstract away lots of complexity in the underlying document model.
## What does it look like?
This tutorial has shown a lot of implementation code. Now, let me show you what it looks like to use this library in practice:
```python
import os

from docbridge import Document, Field, FallthroughField
from pymongo import MongoClient

collection = (
    MongoClient(os.environ["MDB_URI"])
    .get_database("docbridge_test")
    .get_collection("people")
)

collection.delete_many({})  # Clean up any leftover documents.

# Insert a couple of sample documents:
collection.insert_many(
    [
        {
            "name": "Mark Smith",
            "schema_version": 1,
        },
        {
            "full_name": "Mark Smith",
            "first_name": "Mark",
            "last_name": "Smith",
            "schema_version": 2,
        },
    ]
)


# Define a mapping for "person" documents:
class Person(Document):
    version = Field("schema_version")
    name = FallthroughField(
        [
            "name",  # v1
            "full_name",  # v2
        ]
    )


# This finds all the documents in the collection, but wraps each BSON document with a Person wrapper:
people = (Person(doc) for doc in collection.find())

for person in people:
    print(
        "Name:",
        person.name,
    )  # The name (or full_name) of the underlying document.
    print(
        "Document version:",
        person.version,  # The schema_version field of the underlying document.
    )
```
If you run this, it prints out the following:
```
$ python examples/why/simple_example.py
Name: Mark Smith
Document version: 1
Name: Mark Smith
Document version: 2
```
## Upcoming features
I'll be the first to admit that this was a long tutorial given that effectively, I've so far just written an object wrapper around a dictionary that can conduct some simple name remapping. But it's a great start for some of the more advanced features that are upcoming:
- The ability to automatically upgrade the data in a document when data is calculated or otherwise written back to the database
- Recursive class definitions to ensure that you have the full power of the framework no matter how nested your data is
- The ability to transparently handle the subset and extended reference patterns to lazily load data from across documents and collections
- More advanced name remapping to build Python objects that feel like Python objects, on documents that may have dramatically different conventions
- Potentially some tools to help build complex queries against your data
But the _next_ thing to do is to take a step back from writing library code and do some housekeeping. I'm building a test framework to help test directly against MongoDB while having my test writes rolled back after every test, and I'm going to package and publish the docbridge library. You can check out the livestream recording where I attempt this, or you can wait for the accompanying tutorial, which will be written any day now.
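If you're curious what rolling back test writes might look like, here's a minimal sketch of one way to do it (my own assumption, not the actual test framework): a pytest fixture that runs each test inside a MongoDB transaction and aborts it afterwards, so nothing the test writes is persisted. It assumes an `MDB_URI` environment variable pointing at a replica set (which Atlas clusters are), since transactions require one.

```python
import os

import pytest
from pymongo import MongoClient


@pytest.fixture
def rollback_session():
    """Yield a client session whose writes are discarded after each test."""
    client = MongoClient(os.environ["MDB_URI"])
    session = client.start_session()
    session.start_transaction()
    try:
        yield session
    finally:
        session.abort_transaction()  # Throw away everything the test wrote.


def test_insert_is_rolled_back(rollback_session):
    people = rollback_session.client.docbridge_test.people
    people.insert_one({"name": "Temporary Person"}, session=rollback_session)
    # The write is visible inside the transaction...
    assert people.find_one({"name": "Temporary Person"}, session=rollback_session) is not None
    # ...but it will be aborted, not committed, when the fixture finishes.
```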
I'm streaming on the MongoDB YouTube channel nearly every Tuesday, at 2 p.m. GMT! Come join me — it's always helpful to have more people spot the bugs I'm creating as I write the code!
 | md | {
"tags": [
"MongoDB",
"Python"
],
"pageDescription": "Let's build an Object-Document Mapper with some reasonably advanced Python!",
"contentType": "Tutorial"
} | Building a Python Data Access Layer | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/schema-performance-evaluation | created | # Schema Performance Evaluation in MongoDB Using PerformanceBench
MongoDB is often incorrectly described as being schemaless. While it is true that MongoDB offers a level of flexibility when working with schema designs that traditional relational database systems cannot match, as with any database system, the choice of schema design employed by an application built on top of MongoDB will still ultimately determine whether the application is able to meet its performance objectives and SLAs.
Fortunately, a number of design patterns (and corresponding anti-patterns) exist to help guide application developers design appropriate schemas for their MongoDB applications. A significant part of our role as developer advocates within the global strategic account team at MongoDB involves educating developers new to MongoDB on the use of these design patterns and how they differ from those they may have previously used working with relational database systems. My colleague, Daniel Coupal, contributed to a fantastic set of blog posts on the most common patterns and anti-patterns we see working with MongoDB.
Whilst schema design patterns provide a great starting point for guiding our design process, for many applications, there may come a point where it becomes unclear which one of a set of alternative designs will best support the application’s anticipated workloads. In these situations, a quote by Rear Admiral Grace Hopper that my manager, Rick Houlihan, made me aware of rings true: *“One accurate measurement is worth a thousand expert opinions.”*
In this article, we will explore using PerformanceBench, a Java framework application used by my team when evaluating candidate data models for a customer workload.
## PerformanceBench
PerformanceBench is a simple Java framework designed to allow developers to assess the relative performance of different database design patterns within MongoDB.
PerformanceBench defines its functionality in terms of ***models*** (the design patterns being assessed) and ***measures*** (the operations to be measured against each model). As an example, a developer may wish to assess the relative performance of a design based on having data spread across multiple collections and accessed using **$lookup** (join) aggregations, versus one based on a hierarchical model where related documents are embedded within each other. In this scenario, the models might be respectively referred to as *multi-collection* and *hierarchical*, with the "measures" for each being CRUD operations: *Create*, *Read*, *Update*, and *Delete*.
The framework allows Java classes to be developed that implement a defined interface known as “**SchemaTest**,” with one class for each model to be tested. Each **SchemaTest** class implements the functionality to execute the measures defined for that model and returns, as output, an array of documents with the results of the execution of each measure — typically timing data for the measure execution, plus any metadata needed to later identify the parameters used for the specific execution. PerformanceBench stores these returned documents in a MongoDB collection for later analysis and evaluation.
PerformanceBench is configured via a JSON format configuration file which contains an array of documents — one for each model being tested. Each model document in the configuration file contains a set of standard fields that are common across all models being tested, plus a set of custom fields specific to that model. Developers implementing **SchemaTest** model classes are free to include whatever custom parameters their testing of a specific model requires.
When executed, PerformanceBench uses the data in the configuration file to identify the implementing class for each model to be tested and its associated measures. It then instructs the implementing classes to execute a specified number of iterations of each measure, optionally using multiple threads to simulate multi-user/multi-client environments.
Full details of the **SchemaTest** interface and the format of the PerformanceBench JSON configuration file are provided in the GitHub readme file for the project.
The PerformanceBench source in Github was developed using IntelliJ IDEA 2022.2.3 with OpenJDK Runtime Environment Temurin-17.0.3+7 (build 17.0.3+7).
The compiled application has been run on Amazon Linux using OpenJDK 17.0.5 (2022-10-18 LTS - Corretto).
## Designing SchemaTest model classes: factors to consider
Other than the requirement to implement the SchemaTest interface, PerformanceBench gives model class developers wide latitude in designing their classes in whatever way is needed to meet the requirements of their test cases. However, there are some common considerations to take into account.
### Understand the intention of the SchemaTest interface methods
The **SchemaTest** interface defines the following five methods:
```java
public void initialize(JSONObject args);
```
```java
public String name();
```
```java
public void warmup(JSONObject args);
```
```java
public Document[] executeMeasure(int opsToTest, String measure, JSONObject args);
```
```java
public void cleanup(JSONObject args);
```
The **initialize** method is intended to allow implementing classes to carry out any necessary steps prior to measures being executed. This could, for example, include establishing and verifying connection to the database, building or preparing a test data set, and/or removing the results of prior execution runs. PerformanceBench calls initialize immediately after instantiating an instance of the class, but before any measures are executed.
The **name** method should return a string name for the implementing class. Class implementers can set the returned value to anything that makes sense for their use case. Currently, PerformanceBench only uses this method to add context to logging messages.
The **warmup** method is called by PerformanceBench prior to any iterations of any measure being executed. It is designed to allow model class implementers to attempt to create an environment that accurately reflects the expected state of the database in real-life. This could, for example, include carrying out queries designed to seed the MongoDB cache with an appropriate working set of data.
The **executeMeasure** method allows PerformanceBench to instruct a model-implementing class to execute a defined number of iterations of a specified measure. Typically, the method implementation will contain a case statement redirecting execution to the code for each defined measure. However, there is no requirement to implement in that way. The return from this method should be an array of BSON **Document** objects containing the results of each test iteration. Implementers are free to include whatever fields are necessary in these documents to support the metrics their use case requires.
The **cleanup** method is called by PerformanceBench after all iterations of all measures have been executed by the implementing class and is designed primarily to allow test data to be deleted or reset ahead of future test executions. However, the method can also be used to execute any other post test-run functionality necessary for a given use case. This may, for example, include calculating average/mean/percentile execution times for a test run, or for cleanly disconnecting from a database.
### Execute measures using varying test data sets
When assessing a given model, it is important to measure the model’s performance against varying data sets. For example, the following can all impact the performance of different search and data manipulation operations:
* Overall database and collection sizes
* Individual document sizes
* Available CPU and memory on the MongoDB servers being used
* Total number of documents within individual collections.
Executing a sequence of measures using different test data sets can help to identify if there is a threshold beyond which one model may perform better than another. It may also help to identify the amount of memory needed to store the working set of data necessary for the workload being tested to avoid excessive paging. Model-implementing classes should ensure that they add sufficient metadata to the results documents they generate to allow the conditions of the test to be identified during later analysis.
### Ensure queries are supported by appropriate indexes
As with most databases, query performance in MongoDB is dependent on appropriate indexes existing on collections being queried. Model class implementers should ensure any such indexes needed by their test cases either exist or are created during the call to their classes’ **initialize** method. Index size compared with available cache memory should be considered, and often, finding the point at which performance is negatively impacted by paging of indexes is a major objective of PerformanceBench testing.
### Remove variables such as network latency
With any testing regime, one goal should be to limit the number of variables potentially impacting performance discrepancies between test runs so differences in measured performance can be attributed with confidence to the intentional differences in test conditions. Items that come under this heading include network latency between the server running PerformanceBench and the MongoDB cluster servers. When working with MongoDB Atlas in a cloud environment, for example, specifying dedicated rather than shared servers can help avoid background load on the servers impacting performance, whilst deploying all servers in the same availability zone/region can reduce potential impacts from varying network latency.
### Model multi-user environments realistically
PerformanceBench allows measures to be executed concurrently in multiple threads to simulate a multi-user environment. However, if making use of this facility, put some thought into how to accurately model real user behavior. It is rare, for example, for users to execute a complex ad-hoc aggregation pipeline and immediately execute another on its completion. Your model class may therefore want to insert a delay between execution of measure iterations to attempt to model a realistic length of time you may expect between query requests from an individual user in a realistic production environment.
## APIMonitor: an example PerformanceBench model implementation
The PerformanceBench GitHub repository includes example model class implementations for a hypothetical application designed to report on success and failure rates of calls to a set of APIs monitored by observability software.
Data for the application is stored in two document types in two different collections.
The **APIDetails** collection contains one document for each monitored API with metadata about that API:
```json
{
"_id": "api#9",
"apiDetails": {
"appname": "api#9",
"platform": "Linux",
"language": {
"name": "Java",
"version": "11.8.202"
},
"techStack": {
"name": "Springboot",
"version": "UNCATEGORIZED"
},
"environment": "PROD"
},
"deployments": {
"region": "UK",
"createdAt": {
"$date": {
"$numberLong": "1669164599000"
}
}
}
}
```
The second collection, **APIMetrics**, is designed to represent the output from monitoring software with one document generated for each API at 15-minute intervals, giving the total number of calls to the API, the number that were successful, and the number that failed:
```json
{
"_id": "api#1#S#2",
"appname": "api#1",
"creationDate": {
"$date": {
"$numberLong": "1666909520000"
}
},
"transactionVolume": 54682,
"errorCount": 33302,
"successCount": 21380,
"region": "TK",
"year": 2022,
"monthOfYear": 10,
"dayOfMonth": 27,
"dayOfYear": 300
}
```
The documents include a deployment region value for each API (one of “Tokyo,” “Hong Kong,” “India,” or “UK”). The sample model classes in the repository are designed to compare the performance of options for running aggregation pipelines that calculate the total number of calls, the overall success rate, and the corresponding failure rate for all the APIs in a given region, for a given time period.
Four approaches are evaluated:
1. Carrying out an aggregation pipeline against the **APIDetails** collection that includes a **$lookup** stage to perform a join with and summarization of relevant data in the **APIMetrics** collection.
2. Carrying out an initial query against the **APIDetails** collection to produce a list of the API ids for a given region and use that list as input to an **$in** clause as part of a **$match** stage in a separate aggregation pipeline against the APIMetrics collection to summarize the relevant monitoring data.
3. A third approach that uses an equality clause on the region information in each document as part of the initial **$match** stage of a pipeline against the APIMetrics collection to summarize the relevant monitoring data. This approach is designed to test whether an equality match against a single value performs better than one using an **$in** clause with a large number of possible values, as used in the second approach. Two measures are implemented in this model: one that queries the two collections sequentially using the standard MongoDB Java driver, and one that queries the two collections in parallel using the MongoDB Java Reactive Streams driver.
4. A fourth approach that adds a third collection called **APIPreCalc** that stores documents with pre-calculated total calls, total failed calls, and total successful calls for each API for each complete day, month, and year in the data set, with the aim of reducing the number of documents and size of calculations the aggregation pipeline has to execute. This model is an example implementation of the Computed schema design pattern and also uses the MongoDB Java Reactive Streams driver to query the collections in parallel.
For the fourth approach, the pre-computed documents in the **APIPreCalc** collection look like the following:
```json
{
"_id": "api#379#Y#2022",
"transactionVolume": 166912052,
"errorCount": 84911780,
"successCount": 82000272,
"region": "UK",
"appname": "api#379",
"metricsCount": {
"$numberLong": "3358"
},
"year": 2022,
"type": "year_precalc",
"dateTag": "2022"
},
{
"_id": "api#379#Y#2022#M#11",
"transactionVolume": 61494167,
"errorCount": 31247475,
"successCount": 30246692,
"region": "UK",
"appname": "api#379",
"metricsCount": {
"$numberLong": "1270"
},
"year": 2022,
"monthOfYear": 11,
"type": "month_precalc",
"dateTag": "2022-11"
},
{
"_id": "api#379#Y#2022#M#11#D#19",
"transactionVolume": 4462897,
"errorCount": 2286438,
"successCount": 2176459,
"region": "UK",
"appname": "api#379",
"metricsCount": {
"$numberLong": "96"
},
"year": 2022,
"monthOfYear": 11,
"dayOfMonth": 19,
"type": "dom_precalc",
"dateTag": "2022-11-19"
}
```
Note the **type** field in the documents used to differentiate between totals for a year, month, or day of month.
For the purposes of showing how PerformanceBench organizes models and measures, in the PerformanceBench GitHub repository, the first and second approaches are implemented as two separate **SchemaTest** model classes, each with a single measure, while the third and fourth approaches are implemented in a third **SchemaTest** model class with two measures — one for each approach.
### APIMonitorLookupTest class
The first model, implementing the **$lookup approach**, is implemented in package **com.mongodb.devrel.pods.performancebench.models.apimonitor_lookup** in a class named **APIMonitorLookupTest**.
The aggregation pipeline implemented by this approach is:
```json
[
{
$match: {
"deployments.region": "HK",
},
},
{
$lookup: {
from: "APIMetrics",
let: {
apiName: "$apiDetails.appname",
},
pipeline: [
{
$match: {
$expr: {
$and: [
{
$eq: ["$apiDetails.appname", "$$apiName"],
},
{
$gte: [
"$creationDate", ISODate("2022-11-01"),
],
},
],
},
},
},
{
$group: {
_id: "apiDetails.appName",
totalVolume: {
$sum: "$transactionVolume",
},
totalError: {
$sum: "$errorCount",
},
totalSuccess: {
$sum: "$successCount",
},
},
},
{
$project: {
aggregatedResponse: {
totalTransactionVolume: "$totalVolume",
errorRate: {
$cond: [
{
$eq: ["$totalVolume", 0],
},
0,
{
$multiply: [
{
$divide: [
"$totalError",
"$totalVolume",
],
},
100,
],
},
],
},
successRate: {
$cond: [
{
$eq: ["$totalVolume", 0],
},
0,
{
$multiply: [
{
$divide: [
"$totalSuccess",
"$totalVolume",
],
},
100,
],
},
],
},
},
_id: 0,
},
},
],
as: "results",
},
},
]
```
The pipeline is executed against the **APIDetails** collection and is run once for each of the four geographical regions. The **$lookup** stage of the pipeline contains its own sub-pipeline which is executed against the **APIMetrics** collection once for each API belonging to each region.
This results in documents looking like the following being produced:
```json
{
"_id": "api#100",
"apiDetails": {
"appname": "api#100",
"platform": "Linux",
"language": {
"name": "Java",
"version": "11.8.202"
},
"techStack": {
"name": "Springboot",
"version": "UNCATEGORIZED"
},
"environment": "PROD"
},
"deployments": [
{
"region": "HK",
"createdAt": {
"$date": {
"$numberLong": "1649399685000"
}
}
}
],
"results": [
{
"aggregatedResponse": {
"totalTransactionVolume": 43585837,
"errorRate": 50.961542851637795,
"successRate": 49.038457148362205
}
}
]
}
```
One document will be produced for each API in each region. The model implementation records the total time taken (in milliseconds) to generate all the documents for a given region and returns this in a results document to PerformanceBench. The results documents look like:
```json
{
"_id": {
"$oid": "6389b6581a3cd92944057c6c"
},
"startTime": {
"$numberLong": "1669962059685"
},
"duration": {
"$numberLong": "1617"
},
"model": "APIMonitorLookupTest",
"measure": "USEPIPELINE",
"region": "HK",
"baseDate": {
"$date": {
"$numberLong": "1667260800000"
}
},
"apiCount": 189,
"metricsCount": 189,
"threads": 3,
"iterations": 1000,
"clusterTier": "M10",
"endTime": {
"$numberLong": "1669962061302"
}
}
```
As can be seen, as well as the region, start time, end time, and duration of the execution run, the result documents also include:
* The model name and measure executed (in this case, **‘USEPIPELINE’**).
* The number of APIs (**apiCount**) found for this region, and number of APIs for which metrics were able to be generated (**metricsCount**). These numbers should always match and are included as a sanity check that data was generated correctly by the measure.
* The number of **threads** and **iterations** used for the execution of the measure. PerformanceBench allows measures to be executed a defined number of times (iterations) to allow a good average to be determined. Executions can also be run in one or more concurrent threads to simulate multi-user/multi-client environments. In the above example, three threads each concurrently executed 1,000 iterations of the measure (3,000 total iterations).
* The MongoDB Atlas cluster tier on which the measures were executed. This is simply used for tracking purposes when analyzing the results and could be set to any value by the class developer. In the sample class implementations, the value is set to match a corresponding value in the PerformanceBench configuration file. Importantly, it remains the user’s responsibility to ensure the cluster tier being used matches what is written to the results documents.
* **baseDate** indicates the date period for which monitoring data was summarized. For a given **baseDate**, the summarized period is always **baseDate** to the current date (inclusive). An earlier **baseDate** will therefore result in more data being summarized.
With a single measure defined for the model, and with three threads each carrying out 1,000 iterations of the measure, an array of 3,000 results documents will be returned by the model class to PerformanceBench. PerformanceBench then writes these documents to a collection for later analysis.
To support the aggregation pipeline, the model implementation creates the following indexes in its **initialize** method implementation:
**APIDetails: {"deployments.region": 1}**
**APIMetrics: {"appname": 1, "creationDate": 1}**
The model temporarily drops any existing indexes on the collection to avoid contention for memory cache space. The above indexes are subsequently dropped in the model’s **cleanup** method implementation, and all original indexes restored.
### APIMonitorMultiQueryTest class
The second model carries out an initial query against the **APIDetails** collection to produce a list of the API ids for a given region and then uses that list as input to an **$in** clause as part of a **$match** stage in an aggregation pipeline against the **APIMetrics** collection. It is implemented in package **com.mongodb.devrel.pods.performancebench.models.apimonitor_multiquery** in a class named **APIMonitorMultiQueryTest**.
The initial query, carried out against the **APIDetails** collection, looks like:
```
db.APIDetails.find({"deployments.region": "HK"})
```
This query is carried out for each of the four regions in turn and, from the returned documents, a list of the APIs belonging to each region is generated. The generated list is then used as the input to a **$in** clause in the **$match** stage of the following aggregation pipeline run against the APIMetrics collection:
```
[
{
$match: {
"apiDetails.appname": {$in: ["api#1", "api#2", "api#3"]},
creationDate: {
$gte: ISODate("2022-11-01"),
},
},
},
{
$group: {
_id: "$apiDetails.appname",
totalVolume: {
$sum: "$transactionVolume",
},
totalError: {
$sum: "$errorCount",
},
totalSuccess: {
$sum: "$successCount",
},
},
},
{
$project: {
aggregatedResponse: {
totalTransactionVolume: "$totalVolume",
errorRate: {
$cond: [
{
$eq: ["$totalVolume", 0],
},
0,
{
$multiply: [
{
$divide: ["$totalError", "$totalVolume"],
},
100,
],
},
],
},
successRate: {
$cond: [
{
$eq: ["$totalVolume", 0],
},
0,
{
$multiply: [
{
$divide: [
"$totalSuccess",
"$totalVolume",
],
},
100,
],
},
],
},
},
},
},
]
```
This pipeline is essentially the same as the sub-pipeline in the **$lookup** stage of the aggregation used by the **APIMonitorLookupTest** class, the main difference being that this pipeline returns the summary documents for all APIs in a region using a single execution, whereas the sub-pipeline is executed once per API as part of the **$lookup** stage in the **APIMonitorLookupTest** class. Note that the pipeline shown above has only three API values listed in its **$in** clause. In reality, the list generated during testing was between two and three hundred items long for each region.
When the documents are returned from the pipeline, they are merged with the corresponding API details documents retrieved from the initial query to create a set of documents equivalent to those created by the pipeline in the **APIMonitorLookupTest** class. From there, the model implementation creates the same summary documents to be returned to and saved by PerformanceBench.
To support the pipeline, the model implementation creates the following indexes in its **initialize** method implementation:
**APIDetails: {"deployments.region": 1}**
**APIMetrics: {"appname": 1, "creationDate": 1}**
As with the **APIMonitorLookupTest** class, this model temporarily drops any existing indexes on the collections to avoid contention for memory cache space. The above indexes are subsequently dropped in the model’s **cleanup** method implementation, and all original indexes restored.
### APIMonitorRegionTest class
The third model class, **com.mongodb.devrel.pods.performancebench.models.apimonitor_regionquery.APIMonitorRegionTest**, implements two measures, both similar to the measure in **APIMonitorMultiQueryTest**, but where the **$in** clause in the **$match** stage is replaced with an equivalency check on the **”region”** field. The purpose of these measures is to assess whether an equivalency check against the region field provides any performance benefit versus an **$in** clause where the list of matching values could be several hundred items long. The difference between the two measures in this model, named **“QUERYSYNC”** and **“QUERYASYNC”** respectively, is that the first performs the initial find query against the **APIDetails** collection and then the aggregation pipeline against the **APIMetrics** collection in sequence, whilst the second uses the Reactive Streams MongoDB Driver to carry out the two operations in parallel to assess whether that provides any performance benefit.
With these changes, the match stage of the aggregation pipeline for this model looks like:
```json
{
$match: {
"deployments.region": "HK",
creationDate: {
$gte: ISODate("2022-11-01"),
},
},
}
```
In all other regards, the pipeline and the subsequent processes for creating summary documents to pass back to PerformanceBench are the same as those used in **APIMonitorMultiQueryTest**.
### APIMonitorPrecomputeTest class
The fourth model class, **com.mongodb.devrel.pods.performancebench.models.apimonitor_precompute.APIMonitorPrecomputeTest**, implements a single measure named **“PRECOMPUTE”**. This measure makes use of a third collection named **APIPreCalc** that contains precalculated summary data for each API for each complete day, month, and year in the data set. The intention with this measure is to assess what, if any, performance gain can be obtained by reducing the number of documents and resulting calculations the aggregation pipeline is required to carry out.
The measure calculates complete days, months, and years between the **baseDate** specified in the configuration file, and the current date. The total number of calls, failed calls and successful calls for each API for each complete day, month, or year is then retrieved from **APIPreCalc**. A **$unionWith** stage in the pipeline is then used to combine these values with the metrics for the partial days at either end of the period (the basedate and current date) retrieved from **APIMetrics**.
The pipeline used for this measure looks like:
```json
[
{
"$match": {
"region": "UK",
"dateTag": {
"$in": [
"2022-12",
"2022-11-2",
"2022-11-3",
"2022-11-4",
"2022-11-5",
"2022-11-6",
"2022-11-7",
"2022-11-8",
"2022-11-9",
"2022-11-10"
]
}
}
},
{
"$unionWith": {
"coll": "APIMetrics",
"pipeline": [
{
"$match": {
"$expr": {
"$or": [
{
"$and": [
{
"$eq": [
"$region",
"UK"
]
},
{
"$eq": [
"$year", 2022
]
},
{
"$eq": [
"$dayOfYear",
305
]
},
{
"$gte": [
"$creationDate",
{
"$date": "2022-11-01T00:00:00Z"
}
]
}
]
},
{
"$and": [
{
"$eq": [
"$region",
"UK"
]
},
{
"$eq": [
"$year",
2022
]
},
{
"$eq": [
"$dayOfYear",
315
]
},
{
"$lte": [
"$creationDate",
{
"$date": "2022-11-11T01:00:44.774Z"
}
]
}
]
}
]
}
}
}
]
}
},
{
"$group": {
…
}
},
{
"$project": {
…
}
}
]
```
The **$group** and **$project** stages are identical to the prior models and are not shown above.
To support the queries carried out by the pipeline, the model creates the following indexes in its **initialize** method implementation:
**APIDetails: {"deployments.region": 1}**
**APIMetrics: {"region": 1, "year": 1, "dayOfYear": 1, "creationDate": 1}**
**APIPreCalc: {"region": 1, "dateTag": 1}**
### Controlling PerformanceBench execution — config.json
The execution of PerformanceBench is controlled by a configuration file in JSON format. The name and path to this file is passed as a command line argument using the **-c** flag. In the PerformanceBench GitHub repository, the file is called **config.json**:
```json
{
"models": [
{
"namespace": "com.mongodb.devrel.pods.performancebench.models.apimonitor_lookup",
"className": "APIMonitorLookupTest",
"measures": ["USEPIPELINE"],
"threads": 2,
"iterations": 500,
"resultsuri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
"resultsCollectionName": "apimonitor_results",
"resultsDBName": "performancebenchresults",
"custom": {
"uri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
"apiCollectionName": "APIDetails",
"metricsCollectionName": "APIMetrics",
"precomputeCollectionName": "APIPreCalc",
"dbname": "APIMonitor",
"regions": ["UK", "TK", "HK", "IN" ],
"baseDate": "2022-11-01T00:00:00.000Z",
"clusterTier": "M40",
"rebuildData": false,
"apiCount": 1000
}
},
{
"namespace": "com.mongodb.devrel.pods.performancebench.models.apimonitor_multiquery",
"className": "APIMonitorMultiQueryTest",
"measures": ["USEINQUERY"],
"threads": 2,
"iterations": 500,
"resultsuri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
"resultsCollectionName": "apimonitor_results",
"resultsDBName": "performancebenchresults",
"custom": {
"uri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
"apiCollectionName": "APIDetails",
"metricsCollectionName": "APIMetrics",
"precomputeCollectionName": "APIPreCalc",
"dbname": "APIMonitor",
"regions": ["UK", "TK", "HK", "IN" ],
"baseDate": "2022-11-01T00:00:00.000Z",
"clusterTier": "M40",
"rebuildData": false,
"apiCount": 1000
}
},
{
"namespace": "com.mongodb.devrel.pods.performancebench.models.apimonitor_regionquery",
"className": "APIMonitorRegionQueryTest",
"measures": ["QUERYSYNC","QUERYASYNC"],
"threads": 2,
"iterations": 500,
"resultsuri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
"resultsCollectionName": "apimonitor_results",
"resultsDBName": "performancebenchresults",
"custom": {
"uri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
"apiCollectionName": "APIDetails",
"metricsCollectionName": "APIMetrics",
"precomputeCollectionName": "APIPreCalc",
"dbname": "APIMonitor",
"regions": ["UK", "TK", "HK", "IN" ],
"baseDate": "2022-11-01T00:00:00.000Z",
"clusterTier": "M40",
"rebuildData": false,
"apiCount": 1000
}
},
{
"namespace": "com.mongodb.devrel.pods.performancebench.models.apimonitor_precompute",
"className": "APIMonitorPrecomputeTest",
"measures": ["PRECOMPUTE"],
"threads": 2,
"iterations": 500,
"resultsuri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
"resultsCollectionName": "apimonitor_results",
"resultsDBName": "performancebenchresults",
"custom": {
"uri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
"apiCollectionName": "APIDetails",
"metricsCollectionName": "APIMetrics",
"precomputeCollectionName": "APIPreCalc",
"dbname": "APIMonitor",
"regions": ["UK", "TK", "HK", "IN" ],
"baseDate": "2022-11-01T00:00:00.000Z",
"clusterTier": "M40",
"rebuildData": false,
"apiCount": 1000
}
}
]
}
```
The document contains a single top-level field called “models,” the value of which is an array of sub-documents, each of which describes a model and its corresponding measures to be executed. PerformanceBench attempts to execute the models and measures in the order they appear in the file.
For each model, the configuration file defines the Java class implementing the model and its measures, the number of concurrent threads there should be executing each measure, the number of iterations of each measure each thread should execute, an array listing the names of the measures to be executed, and the connection URI, database name, and collection name where PerformanceBench should write results documents.
Additionally, there is a “custom” sub-document for each model where model class implementers can add any parameters specific to their model implementations. In the case of the **APIMonitor** class implementations, this includes the connection URI, database name and collection names where the test data resides, an array of acronyms for the geographic regions, the base date from which monitoring data should be summarized (summaries are based on values for **baseDate** to the current date, inclusive), and the Atlas cluster tier on which the tests were run (this is included in the results documents to allow comparison of performance of different tiers). The custom parameters also include a flag indicating if the test data set should be rebuilt before any of the measures for a model are executed and, if so, how many APIs data should be built for. The data rebuild code included in the sample model implementations builds data for the given number of APIs with the data for each API starting from a random date within the last 90 days.
### Summarizing results of the APIMonitor tests
By having PerformanceBench save the results of each test to a MongoDB collection, we are able to carry out analysis of the results in a variety of ways. The MongoDB aggregation framework includes over 20 different available stages and over 150 available expressions allowing enormous flexibility in performing analysis, and if you are using MongoDB Atlas, you have access to Atlas Charts, allowing you to quickly and easily visually display and analyze the data in a variety of chart formats.
For analyzing larger data sets, the MongoDB driver for Python or Connector for Apache Spark could be considered.
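For example, a short PyMongo script along the following lines could summarize the recorded durations per model and measure. This is a hypothetical sketch: it assumes an `MDB_URI` environment variable for the cluster holding the results, and uses the results database and collection names from the configuration file shown earlier.

```python
import os

from pymongo import MongoClient

# Connect to the cluster where PerformanceBench wrote its results documents.
client = MongoClient(os.environ["MDB_URI"])
results = client["performancebenchresults"]["apimonitor_results"]

pipeline = [
    {
        "$group": {
            "_id": {"model": "$model", "measure": "$measure", "region": "$region"},
            "mean_ms": {"$avg": "$duration"},
            "min_ms": {"$min": "$duration"},
            "max_ms": {"$max": "$duration"},
            "iterations": {"$sum": 1},
        }
    },
    {"$sort": {"mean_ms": 1}},
]

for row in results.aggregate(pipeline):
    key = row["_id"]
    print(
        f"{key['model']}/{key['measure']} ({key['region']}): "
        f"mean={row['mean_ms']:.1f}ms, min={row['min_ms']}ms, max={row['max_ms']}ms "
        f"over {row['iterations']} iterations"
    )
```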
The output from one simulated test run generated the following results:
#### Test setup
Note that the AWS EC2 server used to run PerformanceBench was located within the same AWS availability zone as the MongoDB Atlas cluster in order to minimize variations in measurements due to variable network latency.
The above conditions resulted in a total of 20,000 results documents being written by PerformanceBench to MongoDB (five measures, executed 500 times for each of four geographical regions, by two threads). Atlas Charts was used to display the results:
A further aggregation pipeline was then run on the results to find, for each measure, run by each model:
* The shortest iteration execution time
* The longest iteration execution time
* The mean iteration execution time
* The 95 percentile execution time
* The number of iterations completed per second.
The pipeline used was:
```json
[
{
$group: {
_id: {
model: "$model",
measure: "$measure",
region: "$region",
baseDate: "$baseDate",
threads: "$threads",
iterations: "$iterations",
clusterTier: "$clusterTier",
},
max: {
$max: "$duration",
},
min: {
$min: "$duration",
},
mean: {
$avg: "$duration",
},
stddev: {
$stdDevPop: "$duration",
}
},
},
{
$project: {
model: "$_id.model",
measure: "$_id.measure",
region: "$_id.region",
baseDate: "$_id.baseDate",
threads: "$_id.threads",
iterations: "$_id.iterations",
clusterTier: "$_id.clusterTier",
max: 1,
min: 1,
mean: {
$round: ["$mean"],
},
"95th_Centile": {
$round: [
{
$sum: [
"$mean",
{
$multiply: ["$stddev", 2],
},
],
},
],
},
throuput: {
$round: [
{
$divide: [
"$count",
{
$divide: [
{
$subtract: ["$end", "$start"],
},
1000,
],
},
],
},
2,
],
},
_id: 0,
},
},
]
```
This produced the following results:
*(Image: table of summary results.)*
As can be seen, the pipelines using the **$lookup** stage and the equality searches on the **region** values in APIMetrics performed significantly slower than the other approaches. In the case of the **$lookup** based pipeline, this was most likely because of the overhead of marshaling one call to the sub-pipeline within the lookup for every API (1,000 total calls to the sub-pipeline for each iteration), rather than one call per geographic region (four calls total for each iteration) in the other approaches. With two threads each performing 500 iterations of each measure, this would mean marshaling 1,000,000 calls to the sub-pipeline with the **$lookup** approach as opposed to 4,000 calls for the other measures.
If verification of the results indicated they were accurate, this would be a good indicator that an approach that avoided using a **$lookup** aggregation stage would provide better query performance for this particular use case. In the case of the pipelines with the equality clause on the region field (**QUERYSYNC** and **QUERYASYNC**), their performance was likely impacted by having to sort a large number of documents by **APIID** in the **$group** stage of their pipeline. In contrast, the pipeline using the **$in** clause (**USEINQUERY**) utilized an index on the **APPID** field, meaning documents were returned to the pipeline already sorted by **APPID** — this likely gave it enough of an advantage during the **$group** stage of the pipeline for it to consistently complete the stage faster. Further investigation and refinement of the indexes used by the **QUERYSYNC** and **QUERYASYNC** measures could reduce their performance deficit.
It’s also noticeable that the precompute model was between 25 and 40 times faster than the other approaches. By using the precomputed values for each API, the number of documents the pipeline needed to aggregate was reduced from as much as 96,000, to, at most, 1,000 for each full day being measured, and from as much as 2,976,000 to, at most, 1,000 for each complete month being measured. This has a significant impact on throughput and underlies the value of the computed schema design pattern.
## Final thoughts
PerformanceBench provides a quick way to organize, create, execute, and record the results of tests to measure how different schema designs perform when executing different workloads. However, it is important to remember that the accuracy of the results will depend on how well the implemented model classes simulate the real life access patterns and workloads they are intended to model.
Ensuring the models accurately represent the workloads and schemas being measured is the job of the implementing developers, and PerformanceBench can only provide the framework for executing those models. It cannot improve or provide any guarantee that the results it records are an accurate prediction of an application’s real world performance.
**Finally, it is important to understand that PerformanceBench, while free to download and use, is not in any way endorsed or supported by MongoDB.**
The repository for PerformanceBench can be found on Github. The project was created in IntelliJ IDEA using Gradle.
| md | {
"tags": [
"MongoDB",
"Java"
],
"pageDescription": "Learn how to use PerformanceBench, a Java-based framework application, to carry out empirical performance comparisons of schema design patterns in MongoDB.",
"contentType": "Tutorial"
} | Schema Performance Evaluation in MongoDB Using PerformanceBench | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/deploy-mongodb-atlas-terraform-aws | created | # How to Deploy MongoDB Atlas with Terraform on AWS
**MongoDB Atlas** is the multi-cloud developer data platform that provides an integrated suite of cloud database and data services. We help to accelerate and simplify how you build resilient and performant global applications on the cloud provider of your choice.
**HashiCorp Terraform** is an Infrastructure-as-Code (IaC) tool that lets you define cloud resources in human-readable configuration files that you can version, reuse, and share. Hence, we built the **Terraform MongoDB Atlas Provider** that automates infrastructure deployments by making it easy to provision, manage, and control Atlas configurations as code on any of the three major cloud providers.
In addition, teams can also choose to deploy MongoDB Atlas through the MongoDB Atlas CLI (Command-Line Interface), Atlas Administration API, AWS CloudFormation, and as always, with the Atlas UI (User Interface).
In this blog post, we will learn how to deploy MongoDB Atlas hosted on AWS using Terraform. In addition, we will explore how to use Private Endpoints with AWS Private Link to provide increased security with private connectivity for your MongoDB Atlas cluster without exposing traffic to the public internet.
We designed this Quickstart for beginners with no experience with MongoDB Atlas, HashiCorp Terraform, or AWS who are seeking to set up their first environment. Feel free to access all source code described below from this GitHub repo.
Let’s get started:
## Step 1: Create a MongoDB Atlas account
Sign up for a free MongoDB Atlas account, verify your email address, and log into your new account.
Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
## Step 2: Generate MongoDB Atlas API access keys
Once you have an account created and are logged into MongoDB Atlas, you will need to generate an API key to authenticate the Terraform MongoDB Atlas Provider.
Go to the top of the Atlas UI, click the **Gear Icon** to the right of the organization name you created, click **Access Manager** in the lefthand menu bar, click the **API Keys** tab, and then click the green **Create API Key** box.
Enter a description for the API key that will help you remember what it’s being used for — for example “Terraform API Key.” Next, you’ll select the appropriate permission for what you want to accomplish with Terraform. Both the Organization Owner and Organization Project Creator roles (see role descriptions below) will provide access to complete this task, but by using the principle of least privilege, let’s select the Organization Project Creator role in the dropdown menu and click Next.
Make sure to copy your private key and store it in a secure location. After you leave this page, your full private key will **not** be accessible.
## Step 3: Add API Key Access List entry
MongoDB Atlas API keys have specific endpoints that require an API Key Access List. Creating an API Key Access List ensures that API calls must originate from IPs or CIDR ranges given access. As a good refresher, learn more about cloud networking.
On the same page, scroll down and click **Add Access List Entry**. If you are unsure of the IP address that you are running Terraform on (and you are performing this step from that machine), simply click **Use Current IP Address** and **Save**. Another option is to open up your IP Access List to all, but this comes with significant potential risk. To do this, you can add the following two CIDRs: **0.0.0.0/1** and **128.0.0.0/1**. These entries will open your IP Access List to at most 4,294,967,296 (or 2^32) IPv4 addresses and should be used with caution.
## Step 4: Set up billing method
Go to the lefthand menu bar and click **Billing** and then **Add Payment Method**. Follow the steps to ensure there is a valid payment method for your organization. Note when creating a free (forever) or M0 cluster tier, you can skip this step.
## Step 5: Install Terraform
Go to the official HashiCorp Terraform downloads page and follow the instructions to set up Terraform on the platform of your choice. For the purposes of this demo, we will be using an Ubuntu/Debian environment.
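As a reference, one way to install Terraform on Ubuntu/Debian at the time of writing is via HashiCorp’s apt repository; treat this as a sketch and defer to the official downloads page if the instructions there differ:

```
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
terraform -version
```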
## Step 6: Defining the MongoDB Atlas Provider with environment variables
We will need to configure the MongoDB Atlas Provider using the MongoDB Atlas API Key you generated earlier (Step 2). We will be securely storing these secrets as Environment Variables.
First, go to the terminal window and create Environment Variables with the below commands. This prevents you from having to hard-code secrets directly into Terraform config files (which is not recommended):
```
export MONGODB_ATLAS_PUBLIC_KEY=""
export MONGODB_ATLAS_PRIVATE_KEY=""
```
Next, in an empty directory, create an empty file called **provider.tf**. Here we will input the following code to define the MongoDB Atlas Provider. This will automatically grab the most current version of the Terraform MongoDB Atlas Provider.
```
# Define the MongoDB Atlas Provider
terraform {
required_providers {
mongodbatlas = {
source = "mongodb/mongodbatlas"
}
}
required_version = ">= 0.13"
}
```
## Step 7: Creating variables.tf, terraform.tfvars, and main.tf files
We will now create a **variables.tf** file to declare all the Terraform variables used as part of this exercise and all of which are of type string. Next, we’ll define values (i.e. our secrets) for each of these variables in the **terraform.tfvars** file. Note as with most secrets, best practice is not to upload them (or the **terraform.tfvars** file itself) to public repos.
```
# Atlas Organization ID
variable "atlas_org_id" {
type = string
description = "Atlas Organization ID"
}
# Atlas Project Name
variable "atlas_project_name" {
type = string
description = "Atlas Project Name"
}
# Atlas Project Environment
variable "environment" {
type = string
description = "The environment to be built"
}
# Cluster Instance Size Name
variable "cluster_instance_size_name" {
type = string
description = "Cluster instance size name"
}
# Cloud Provider to Host Atlas Cluster
variable "cloud_provider" {
type = string
description = "AWS or GCP or Azure"
}
# Atlas Region
variable "atlas_region" {
type = string
description = "Atlas region where resources will be created"
}
# MongoDB Version
variable "mongodb_version" {
type = string
description = "MongoDB Version"
}
# IP Address Access
variable "ip_address" {
type = string
description = "IP address used to access Atlas cluster"
}
```
The example below specifies the most current MongoDB version (as of this writing), which is 6.0, and an M10 cluster tier, which is great for a robust development environment and will be deployed on AWS in the US\_WEST\_2 Atlas region. For specific details about all the available options besides M10 and US\_WEST\_2, please see the documentation.
```
atlas_org_id = ""
atlas_project_name = "myFirstProject"
environment = "dev"
cluster_instance_size_name = "M10"
cloud_provider = "AWS"
atlas_region = "US_WEST_2"
mongodb_version = "6.0"
ip_address = ""
```
Next, create a **main.tf** file, which we will populate together to create the minimum required resources to create and access your cluster: a MongoDB Atlas Project (Step 8), Database User/Password (Step 9), IP Access List (Step 10), and of course, the MongoDB Atlas Cluster itself (Step 11). We will then walk through how to create Terraform Outputs (Step 12) so you can access your Atlas cluster and then create a Private Endpoint with AWS PrivateLink (Step 13). If you are already familiar with any of these steps, feel free to skip ahead.
Note: As infrastructure resources get created, modified, or destroyed, several more files will be generated in your directory by Terraform (for example the **terraform.tfstate** file). It is best practice not to modify these additional files directly unless you know what you are doing.
## Step 8: Create MongoDB Atlas project
MongoDB Atlas Projects help to organize resources and provide granular access controls inside our MongoDB Atlas Organization. Note that MongoDB Atlas “Groups” and “Projects” are synonymous terms.
To create a Project using Terraform, we will need the **MongoDB Atlas Organization ID** with at least the Organization Project Creator role (defined when we created the MongoDB Atlas Provider API Keys in Step 2).
To get this information, simply click on **Settings** on the lefthand menu bar in the Atlas UI and click the copy icon next to Organization ID. You can now paste this information as the atlas\_org\_id in your **terraform.tfvars** file.
Next in the **main.tf** file, we will use the resource **mongodbatlas\_project** from the Terraform MongoDB Atlas Provider to create our Project. To do this, simply input:
```
# Create a Project
resource "mongodbatlas_project" "atlas-project" {
org_id = var.atlas_org_id
name = var.atlas_project_name
}
```
## Step 9: Create MongoDB Atlas user/password
To authenticate a client to MongoDB, like the MongoDB Shell or your application code using a MongoDB Driver (officially supported in Python, Node.js, Go, Java, C#, C++, Rust, and several others), you must add a corresponding Database User to your MongoDB Atlas Project. See the documentation for more information on available user roles so you can customize the user’s RBAC (Role Based Access Control) as per your team’s needs.
For now, simply input the following code as part of the next few lines in the **main.tf** file to create a Database User with a random 16-character password. This will use the resource **mongodbatlas_database_user** from the Terraform MongoDB Atlas Provider. The database user_password is a sensitive secret, so to access it after our deployment is complete, you will need to run the “**terraform output -json user_password**” command in your terminal window to reveal it.
```
# Create a Database User
resource "mongodbatlas_database_user" "db-user" {
username = "user-1"
password = random_password.db-user-password.result
project_id = mongodbatlas_project.atlas-project.id
auth_database_name = "admin"
roles {
role_name = "readWrite"
database_name = "${var.atlas_project_name}-db"
}
}
# Create a Database Password
resource "random_password" "db-user-password" {
length = 16
special = true
override_special = "_%@"
}
```
## Step 10: Create IP access list
Next, we will create the IP Access List by inputting the following into your **main.tf** file. Be sure to look up the IP address (or CIDR range) of the machine you’ll be connecting to your MongoDB Atlas cluster from and paste it into the **terraform.tfvars** file (as shown in the code block in Step 7). This will use the resource **mongodbatlas_project_ip_access_list** from the Terraform MongoDB Atlas Provider.
```
# Create Database IP Access List
resource "mongodbatlas_project_ip_access_list" "ip" {
project_id = mongodbatlas_project.atlas-project.id
ip_address = var.ip_address
}
```
## Step 11: Create MongoDB Atlas cluster
We will now use the **mongodbatlas_advanced_cluster** resource to create a MongoDB Atlas Cluster. With this resource, you can not only create a deployment, but you can manage it over its lifecycle, scaling compute and storage independently, enabling cloud backups, and creating analytics nodes.
In this example, we group three database servers together to create a replica set with a primary server and two secondary replicas duplicating the primary's data. This architecture is primarily designed with high availability in mind and can automatically handle failover if one of the servers goes down — and recover automatically when it comes back online. We call all these nodes *electable* because an election is held between them to work out which one is primary.
We will also set the optional *backup_enabled* flag to true. This provides increased data resiliency by enabling localized backup storage using the native snapshot functionality of the cluster's cloud service provider (see documentation).
Lastly, we create one *analytics* node. Analytics nodes are read-only nodes used exclusively to execute database queries. That means the analytics workload is isolated to this node only, so operational performance isn't impacted. This makes analytics nodes ideal for running longer, more computationally intensive analytics queries without impacting your replica set performance (see documentation).
```
# Create an Atlas Advanced Cluster
resource "mongodbatlas_advanced_cluster" "atlas-cluster" {
project_id = mongodbatlas_project.atlas-project.id
name = "${var.atlas_project_name}-${var.environment}-cluster"
cluster_type = "REPLICASET"
backup_enabled = true
mongo_db_major_version = var.mongodb_version
replication_specs {
region_configs {
electable_specs {
instance_size = var.cluster_instance_size_name
node_count = 3
}
analytics_specs {
instance_size = var.cluster_instance_size_name
node_count = 1
}
priority = 7
provider_name = var.cloud_provider
region_name = var.atlas_region
}
}
}
```
## Step 12: Create Terraform outputs
You can output information from your Terraform configuration to the terminal window of the machine executing Terraform commands. This can be especially useful for values you won’t know until the resources are created, like the random password for the database user or the connection string to your Atlas cluster deployment. The code below in the **main.tf** file will output these values to the terminal display for you to use after Terraform completes.
```
# Outputs to Display
output "atlas_cluster_connection_string" { value = mongodbatlas_advanced_cluster.atlas-cluster.connection_strings.0.standard_srv }
output "ip_access_list" { value = mongodbatlas_project_ip_access_list.ip.ip_address }
output "project_name" { value = mongodbatlas_project.atlas-project.name }
output "username" { value = mongodbatlas_database_user.db-user.username }
output "user_password" {
sensitive = true
value = mongodbatlas_database_user.db-user.password
}
```
## Step 13: Set up a private endpoint to your MongoDB Atlas cluster
Increasingly, we see our customers want their data to traverse only private networks. One of the best ways to connect to Atlas over a private network from AWS is to use AWS PrivateLink, which establishes a one-way connection that preserves your perceived network trust boundary while eliminating additional security controls associated with other options like VPC peering (Azure Private Link and GCP Private Service Connect are supported, as well). Learn more about AWS Private Link with MongoDB Atlas.
To get started, we will need to first **Install the AWS CLI**. If you have not already done so, also see AWS Account Creation and AWS Access Key Creation for more details.
Next, let’s go to the terminal and create AWS Environment Variables with the below commands (similar to what we did in Step 6 with our MongoDB Atlas credentials). Use the same region as above, except with the AWS naming convention instead, i.e., “us-west-2”.
```
export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""
export AWS_DEFAULT_REGION=""
```
Then, we add the AWS provider to the **provider.tf** file. This will enable us to now deploy AWS resources from the **Terraform AWS Provider** in addition to MongoDB Atlas resources from the **Terraform MongoDB Atlas Provider** directly from the same Terraform config files.
```
# Define the MongoDB Atlas and AWS Providers
terraform {
required_providers {
mongodbatlas = {
source = "mongodb/mongodbatlas"
}
aws = {
source = "hashicorp/aws"
}
}
required_version = ">= 0.13"
}
```
We now add a new entry in our **variables.tf** and **terraform.tfvars** files for the desired AWS region. As mentioned earlier, we will be using “us-west-2” which is the AWS region in Oregon, USA.
**variables.tf**
```
# AWS Region
variable "aws_region" {
type = string
description = "AWS Region"
}
```
**terraform.tfvars**
```
aws_region = "us-west-2"
```
Next, we create two more files for each of the new types of resources to be deployed: **aws-vpc.tf** to create a full network configuration on the AWS side and **atlas-pl.tf** to create both the Amazon VPC Endpoint and the MongoDB Atlas Endpoint of the PrivateLink. In your environment, you may already have an AWS network created. If so, then you’ll want to include the correct values in the **atlas-pl.tf** file and won’t need the **aws-vpc.tf** file. To get started quickly, we will simply git clone them from our repo, as sketched below.
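As a minimal sketch, copying the two files could look like the following; the repository URL placeholder and file paths are assumptions, so adjust them to match the repo linked above:

```
git clone <repo-url> atlas-terraform-quickstart
cp atlas-terraform-quickstart/aws-vpc.tf .
cp atlas-terraform-quickstart/atlas-pl.tf .
```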
After that, we will use a Terraform Data Source that waits until the PrivateLink creation is completed so we can get the new connection string for the PrivateLink connection. In **main.tf**, simply add the below:
```
data "mongodbatlas_advanced_cluster" "atlas-cluser" {
project_id = mongodbatlas_project.atlas-project.id
name = mongodbatlas_advanced_cluster.atlas-cluster.name
depends_on = mongodbatlas_privatelink_endpoint_service.atlaseplink]
}
```
Lastly, staying in the **main.tf** file, we add the below additional output code snippet in order to display the Private Endpoint-Aware Connection String to the terminal:
```
output "privatelink_connection_string" {
  value = lookup(mongodbatlas_advanced_cluster.atlas-cluster.connection_strings[0].aws_private_link_srv, aws_vpc_endpoint.ptfe_service.id)
}
```
## Step 14: Initializing Terraform
We are now all set to start creating our first MongoDB Atlas deployment!
Open the terminal console and type the following command: **terraform init** to initialize Terraform. This will download and install both the Terraform AWS and MongoDB Atlas Providers (if you have not done so already).
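Run it from the directory containing your configuration files:

```
terraform init
```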
If successful, you should see the message: “Terraform has been successfully initialized!”
## Step 15: Review Terraform deployment
Next, we will run the **terraform plan** command. This will output what Terraform plans to do, such as creation, changes, or destruction of resources. If the output is not what you expect, then it’s likely an issue in your Terraform configuration files.
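For reference, the command is simply:

```
terraform plan
```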
## Step 16: Apply the Terraform configuration
Next, use the **terraform apply** command to deploy the infrastructure. If all looks good, input **yes** to approve terraform build.
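The command is:

```
terraform apply
```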
**Success!**
Note new AWS and MongoDB Atlas resources can take \~15 minutes to provision and the provider will continue to give you a status update until it is complete. You can also check on progress through the Atlas UI and AWS Management Console.
The connection string shown in the output can be used to access (and perform CRUD operations on) your MongoDB database via the MongoDB Shell, the MongoDB Compass GUI, and the Data Explorer in the UI (as shown below). Learn more about how to interact with data in MongoDB Atlas with the MongoDB Query Language (MQL). As a pro tip, I regularly leverage the MongoDB Cheat Sheet to quickly reference key commands.
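As a sketch, connecting with the MongoDB Shell could look like the following; substitute the `atlas_cluster_connection_string` value from the Terraform output for the placeholder address, and note that `user-1` is the database user created in Step 9 (mongosh will prompt you for its password):

```
mongosh "mongodb+srv://<your-cluster-address>.mongodb.net/" --username user-1
```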
Lastly, as a reminder, the database user_password is a sensitive secret, so to access it, you will need to run the “**terraform output -json user_password**” command in your terminal window to reveal it.
## Step 17: Terraform destroy
Feel free to explore more complex environments (including code examples for deploying MongoDB Atlas Clusters from other cloud vendors), which you can find in our public repo examples. When ready to delete all the infrastructure created, you can leverage the **terraform destroy** command.
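When you are ready:

```
terraform destroy
```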
Here, the resources we created earlier will all be terminated. If all looks good, input **yes**.
After a few minutes, we are back to an empty slate on both our MongoDB Atlas and AWS environments. It goes without saying, but please be mindful when using the terraform destroy command in any kind of production environment.
The HashiCorp Terraform MongoDB Atlas Provider is an open source project licensed under the Mozilla Public License 2.0 and we welcome community contributions. To learn more, see our contributing guidelines. As always, feel free to contact us with any issues.
Happy Terraforming with MongoDB Atlas on AWS! | md | {
"tags": [
"Atlas",
"AWS",
"Terraform"
],
"pageDescription": "A beginner’s guide to start deploying Atlas clusters today with Infrastructure as Code best practices",
"contentType": "Tutorial"
} | How to Deploy MongoDB Atlas with Terraform on AWS | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/autocomplete-atlas-search-nextjs | created | # Adding Autocomplete To Your NextJS Applications With Atlas Search
## Introduction
Imagine landing on a webpage with thousands of items and having to scroll through all of them to find what you are looking for. You will agree that it's a bad user experience, and users faced with it might leave for an alternative website, which is not what any website owner would want.
Providing users with an excellent search experience, such that they can easily search for what they want to see, is crucial for giving them a top-notch user experience.
The easiest way to incorporate rich, fast, and relevant searches into your applications is through MongoDB Atlas Search, a component of MongoDB Atlas.
## Explanation of what we will be building
In this guide, I will show you how I created a text search for a home rental website, using Atlas Search to integrate full-text search functionality and incorporating autocomplete into the search box.
This search will give users the ability to search for homes by country.
Let's look at the technology we will be using in this project.
## Overview of the technologies and tools that will be used
If you'd like to follow along, here's what I'll be using.
### Frontend framework
NextJS: We will be using NextJS to build our front end. NextJS is a popular JavaScript framework for building server-rendered React applications.
I chose this framework because it provides a simple setup and helps with optimizations such as automatic code splitting and optimized performance for faster load times. Additionally, it has a strong community and ecosystem, with a large number of plugins and examples available, making it an excellent choice for building both small- and large-scale web applications.
### Backend framework
NodeJS and ExpressJS: We will be using these to build our back end. Both are used together for building server-side web applications.
I chose these frameworks because Node.js is an open-source, cross-platform JavaScript runtime environment for building fast, scalable, and high-performance server-side applications. Express.js, on the other hand, is a popular minimal and flexible Node.js web application framework that provides a robust set of features for building web and mobile applications.
### Database service provider
MongoDB Atlas is a fully managed cloud database service provided by MongoDB. It's a cloud-hosted version of the popular NoSQL database (MongoDB) and offers automatic scalability, high availability, and built-in security features. With MongoDB Atlas, developers can focus on building their applications rather than managing the underlying infrastructure, as the service takes care of database setup, operation, and maintenance.
### MongoDB Atlas Search
MongoDB Atlas Search is a full-text search and analytics engine integrated with MongoDB Atlas. It enables developers to add search functionality to their applications by providing fast and relevant search results, including text search and faceted search, and it also supports autocomplete and geospatial search.
MongoDB Atlas Search is designed to be highly scalable and easy to use.
## Pre-requisites
The full source of this application can be found on GitHub.
## Project setup
Let's get to work!
### Setting up our project
To start with, let's clone the repository that contains the starting source code from GitHub.
```bash
git clone https://github.com/mongodb-developer/search-nextjs/
cd search-nextjs
```
Once the clone is completed, in this repository, you will see two sub-folders:
- `mdbsearch`: Contains the NextJS project (front end)
- `backend`: Contains the Node.js project (back end)
Open the project with your preferred text editor. With this done, let's set up our MongoDB environment.
### Setting up a MongoDB account
To set up our MongoDB environment, we will need to follow the below instructions from the MongoDB official documentation.
- Sign Up for a Free MongoDB Account
- Create a Cluster
- Add a Database User
- Configure a Network Connection
- Load Sample Data
- Get Connection String
Your connection string should look like this: mongodb+srv://user:
### Identify a database to work with
We will be working with the `sample-airbnb` sample data from MongoDB for this application because it contains appropriate entries for the project.
If you complete the above steps, you should have the sample data loaded in your cluster. Otherwise, check out the documentation on how to load sample data.
## Start the Node.js backend API
The API for our front end will be provided by the Node.js back end. To establish a connection to your database, let's create a `.env` file and update it with the connection string.
```bash
cd backend
npm install
touch .env
```
Update .env as below
```bash
PORT=5050
MONGODB_URI=
```
To start the server, we can either utilize the node executable or, for ease during the development process, use `nodemon`. This tool can automatically refresh your server upon detecting modifications made to the source code. For further information on tool installation, visit the official website.
Run the below code
```bash
npx nodemon .
```
This command will start the server. You should see a message in your console confirming that the server is running and the database is connected.
## Start the NextJs frontend application
With the back end running, let's start the front end of your application. Open a new terminal window and navigate to the `mdbsearch` folder. Then, install all the necessary dependencies for this project and initiate the project by running the npm command. Let's also create a `.env` file and update it with the backend url.
```bash
cd ../mdbsearch
npm install
touch .env
```
Create a .env file, and update as below:
```bash
NEXT_PUBLIC_BASE_URL=http://localhost:5050/
```
Start the application by running the below command.
```bash
npm run dev
```
Once the application starts running, you should see the page below at http://localhost:3000. The front end is already connected to the running back end; during the course of this implementation, we only need to make a few modifications to our code.
With this data loading from the MongoDB database, next, let's proceed to implement the search functionality.
## Implementing text search in our application with MongoDB Altas Search
To be able to search through data in our collection, we need to follow the below steps:
### Create a search index
The MongoDB free tier account allows us to create at most three search indexes.
From the previously created cluster, click on the Browse collections button, navigate to Search, and at the right side of the search page, click on the Create index button. On this screen, click Next to use the visual editor, add an index name (in our case, `search_home`), select the `listingsAndReviews` collection from the `sample_airbnb` database, and click Next.
From this screen, click on Refine Your Index. Here is where we will specify the fields in our collection that will be used to generate search results: in our case, `address` and `property_type`. The `address` field is an object that has a `country` property, which is our target.
Therefore, on this screen, we need to toggle off the Enable Dynamic Mapping option. Under Field Mapping, click the Add Field Mapping button. In the Field Name input, type `address.country`, and in the Data Type, make sure String is selected. Then, scroll to the bottom of the dialog and click the Add button. Create another Field Mapping for `property_type`. Data Type should also be String.
The index configuration should be as shown below.
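If you prefer the JSON editor over the visual editor, the equivalent index definition should look roughly like this (a sketch of the static field mappings described above):

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "address": {
        "type": "document",
        "fields": {
          "country": {
            "type": "string"
          }
        }
      },
      "property_type": {
        "type": "string"
      }
    }
  }
}
```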
At this point, scroll to the bottom of the screen and click on Save Changes. Then, click the Create Search Index button. Then wait while MongoDB creates your search index. It usually takes a few seconds to be active. Once active, we can start querying our collection with this index.
You can find detailed information on how to create a search index in the documentation.
## Testing our search index
MongoDB provides a search tester, which can be used to test our search indexes before integrating them into our application. To test our search index, let's click the QUERY button in the search index. This will take us to the Search Tester screen.
Remember, we configured our search index to return results from `address.country` or `property_type`. So, you can test with values like `spain`, `brazil`, or `apartment`. These values will return results, and we can explore each result document to see where the match was found in these fields.
Test with values like `span` and `brasil`. These will return no data result because it's not an exact match. MongoDB understands that scenarios like these are likely to happen. So, Atlas Search has a fuzzy matching feature. With fuzzy matching, the search tool will be searching for not only exact matching keywords but also for matches that might have slight variations, which we will be using in this project. You can find the details on fuzzy search in the documentation.
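For illustration, a `$search` stage using the text operator with an explicit fuzzy option might look like the snippet below; the `fuzzySearchStage` name and the `maxEdits` value are just examples, and in the endpoint we build later we simply pass an empty `fuzzy: {}` object to use the defaults:

```javascript
// Illustrative $search stage: a misspelled "brasil" still matches "Brazil"
const fuzzySearchStage = {
  "$search": {
    "index": "search_home",
    "text": {
      "query": "brasil",
      "path": "address.country",
      "fuzzy": { "maxEdits": 2 } // allow up to two single-character edits
    }
  }
};
```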
With the search index created and tested, we can now implement it in our application. But before that, we need to understand what a MongoDB aggregation pipeline is.
## Consume search index in our backend application
Now that we have the search index configured, let's integrate it into the API used for this project. Open the `backend/index.js` file, find the comment `Search endpoint goes here`, and update it with the below code.
Start by creating the route needed by our front end.
```javascript
// Search endpoint goes here
app.get("/search/:search", async (req, res) => {
const queries = JSON.parse(req.params.search)
// Aggregation pipeline goes here
});
```
In this endpoint, `/search/:search`, we create a two-stage aggregation pipeline: `$search` and `$project`. `$search` uses the index `search_home`, which we created earlier. The `$search` stage structure will be based on the query parameters sent from the front end, while the `$project` stage simply returns the needed fields from the `$search` result.
This endpoint will receive the `country` and `property_type`, so we can start building the aggregation pipeline. There will always be a category property. We can start by adding this.
```javascript
// Start building the search aggregation stage
let searcher_aggregate = {
"$search": {
"index": 'search_home',
"compound": {
"must":
// get home where queries.category is property_type
{ "text": {
"query": queries.category,
"path": 'property_type',
"fuzzy": {}
}},
// get home where queries.country is address.country
{"text": {
"query": queries.country,
"path": 'address.country',
"fuzzy": {}
}}
]}
}
};
```
We don't necessarily want to send all the fields back to the frontend, so we can use a projection stage to limit the data we send over.
```javascript
app.get("/search/:search", async (req, res) => {
const queries = JSON.parse(req.params.search)
// Start building the search aggregation stage
let searcher_aggregate = { ... };
// A projection will help us return only the required fields
let projection = {
'$project': {
'accommodates': 1,
'price': 1,
'property_type': 1,
'name': 1,
'description': 1,
'host': 1,
'address': 1,
'images': 1,
"review_scores": 1
}
};
});
```
Finally, we can use the `aggregate` method to run this aggregation pipeline, and return the first 50 results to the front end.
```javascript
app.get("/search/:search", async (req, res) => {
const queries = JSON.parse(req.params.search)
// Start building the search aggregation stage
let searcher_aggregate = { ... };
// A projection will help us return only the required fields
let projection = { ... };
// We can now execute the aggregation pipeline, and return the first 50 elements
let results = await itemCollection.aggregate([ searcher_aggregate, projection ]).limit(50).toArray();
res.send(results).status(200);
});
```
The result of the pipeline will be returned when a request is made to `/search/:search`.
At this point, we have an endpoint that can be used to search for homes by their country.
The full source of this endpoint can be located on GitHub.
## Implement search feature in our frontend application
From our project folder, open the `mdbsearch/components/Header/index.js` file. Find the `searchNow` function and update it with the below code.
```javascript
//Search function goes here
const searchNow = async (e) => {
setshow(false)
let search_params = JSON.stringify({
country: country,
category: `${activeCategory}`
})
setLoading(true)
await fetch(`${process.env.NEXT_PUBLIC_BASE_URL}search/${search_params}`)
.then((response) => response.json())
.then(async (res) => {
updateCategory(activeCategory, res)
router.query = { country, category: activeCategory };
setcountryValue(country);
router.push(router);
})
.catch((err) => console.log(err))
.finally(() => setLoading(false))
}
```
Next, find the `handleChange` function and update it with the below code.
```javascript
const handleChange = async (e) => {
//Autocomplete function goes here
setCountry(e.target.value);
}
```
With the above update, let's explore our application. Start the application by running `npm run dev` in the terminal. Once the page is loaded, choose a property type, and then click on "search country." At the top search bar, type `brazil`. Finally, click the search button. You should see the result as shown below.
The search result shows data where `address.country` is brazil and `property_type` is apartment. Explore the search with values such as braz, brzl, bral, etc., and we will still get results because of the fuzzy matching feature.
Now, we can say the experience on the website is good. However, we can still make it better by adding an autocomplete feature to the search functionality.
## Add autocomplete to search box
Most modern search engines commonly include an autocomplete dropdown that provides suggestions as you type. Users prefer to quickly find the correct match instead of browsing through an endless list of possibilities. This section will demonstrate how to utilize Atlas Search autocomplete capabilities to implement this feature in our search box.
In our case, we are expecting to see suggestions of countries as we type into the country search input. To implement this, we need to create another search index.
From the previously created cluster, click on the Browse collections button and navigate to Search. At the right side of the search page, click on the Create index button. On this screen, click Next to use the visual editor, add an index name (in our case, `country_autocomplete`), select the listingsAndReviews collection from the sample_airbnb database, and click Next.
From this screen, click on Refine Your Index. We need to toggle off the Enable Dynamic Mapping option.
Under Field Mapping, click the Add Field Mapping button. In the Field Name input, type `address.country`, and in the Data Type, this time, make sure Autocomplete is selected. Then scroll to the bottom of the dialog and click the Add button.
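If you would rather use the JSON editor, the equivalent of this autocomplete mapping should look roughly like the sketch below (default autocomplete tokenization options are assumed):

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "address": {
        "type": "document",
        "fields": {
          "country": {
            "type": "autocomplete"
          }
        }
      }
    }
  }
}
```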
At this point, scroll to the bottom of the screen and Save Changes. Then, click the Create Search Index button. Wait while MongoDB creates your search index --- it usually takes a few seconds to be active.
Once done, we should have two search indexes, as shown below.
## Implement autocomplete API in our backend application
With this done, let's update our backend API as below:
Open the `backend/index.js` file, and update it with the below code:
```javascript
//Country autocomplete endpoint goes here
app.get("/country/autocomplete/:param", async (req, res) => {
  let results = await itemCollection.aggregate([
{
'$search': {
'index': 'country_autocomplete',
'autocomplete': {
'query': req.params.param,
'path': 'address.country',
},
'highlight': {
'path': [ 'address.country']
}
}
}, {
'$limit': 1
}, {
'$project': {
'address.country': 1,
'highlights': {
'$meta': 'searchHighlights'
}
}
}
]).toArray();
res.send(results).status(200);
});
```
The above endpoint will return a suggestion of countries as the user types in the search box. In a three-stage aggregation pipeline, the first stage in the pipeline uses the `$search` operator to perform an autocomplete search on the `address.country` field of the documents in the `country_autocomplete` index. The query parameter is set to the user input provided in the URL parameter, and the `highlight` parameter is used to return the matching text with highlighting.
The second stage in the pipeline limits the number of results returned to one.
The third stage in the pipeline uses the `$project` operator to include only the `address.country` field and the search highlights in the output.
## Implement autocomplete in our frontend application
Let's also update the front end as below. From our project folder, open the `mdbsearch/components/Header/index.js` file. Find the `handleChange` function and update it with the below code.
```javascript
//Autocomplete function goes here
const handleChange = async (e) => {
setCountry(e.target.value);
if(e.target.value.length > 1){
await fetch(`${process.env.NEXT_PUBLIC_BASE_URL}country/autocomplete/${e.target.value}`)
.then((response) => response.json())
.then(async (res) => {
setsug_countries(res)
})
}
else{
setsug_countries([])
}
}
```
The above function will make an HTTP request to the `country/autocomplete` endpoint and save the response in a state variable.
With our code updated accordingly, let's explore our application. Everything should be fine now. We should be able to search homes by their country, and we should get suggestions as we type into the search box.
Voila! We now have fully functional text search for a home rental website. This will improve the user experience on the website.
## Summary
To have a great user experience on a website, you'll agree with me that it's crucial to make it easy for your users to search for what they are looking for. In this guide, I showed you how I created a text search for a home rental website with MongoDB Atlas Search. This search will give users the ability to search for homes by their country.
MongoDB Atlas Search is a full-text search engine that enables developers to build rich search functionality into their applications, allowing users to search through large volumes of data quickly and easily. Atlas Search also supports a wide range of search options, including fuzzy matching, partial word matching, and wildcard searches. Check out more on MongoDB Atlas Search from the official documentation.
Questions? Comments? Let's continue the conversation! Head over to the MongoDB Developer Community --- we'd love to hear from you. | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "In this tutorial, you will learn how to add the autocomplete feature to a website built with NextJS.",
"contentType": "Tutorial"
} | Adding Autocomplete To Your NextJS Applications With Atlas Search | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mistral-ai-integration | created | # Revolutionizing AI Interaction: Integrating Mistral AI and MongoDB for a Custom LLM GenAI Application
Large language models (LLMs) are known for their ability to converse with us in an almost human-like manner. Yet, the complexity of their inner workings often remains shrouded in mystery, sparking intrigue. This intrigue intensifies when we factor in the privacy challenges associated with AI technologies.
In addition to privacy concerns, cost is another significant challenge. Deploying a large language model is crucial for AI applications, and there are two primary options: self-hosted or API-based models. With API-based LLMs, the model is hosted by a service provider, and costs accrue with each API request. In contrast, a self-hosted LLM runs on your own infrastructure, giving you complete control over costs. The bulk of expenses for a self-hosted LLM pertains to the necessary hardware.
Another aspect to consider is the availability of LLM models. With API-based models, during times of high demand, model availability can be compromised. In contrast, managing your own LLM ensures control over availability. You will be able to make sure all your queries to your self-managed LLM can be handled properly and under your control.
Mistral AI, a French startup, has introduced innovative solutions with the Mistral 7B model, Mistral Mixture of Experts, and Mistral Platform, all standing for a spirit of openness. This article explores how Mistral AI, in collaboration with MongoDB, a developer data platform that unifies operational, analytical, and vector search data services, is revolutionizing our interaction with AI. We will delve into the integration of Mistral AI with MongoDB Atlas and discuss its impact on privacy, cost efficiency, and AI accessibility.
## Mistral AI: a game-changer
Mistral AI has emerged as a pivotal player in the open-source AI community, setting new standards in AI innovation. Let's break down what makes Mistral AI so transformative.
### A beacon of openness: Mistral AI's philosophy
Mistral AI's commitment to openness is at the core of its philosophy. This commitment extends beyond just providing open-source code; it's about advocating for transparent and adaptable AI models. By prioritizing transparency, Mistral AI empowers users to truly own and shape the future of AI. This approach is fundamental to ensuring AI remains a positive, accessible force for everyone.
### Unprecedented performance with Mistral 8x7B
Mistral AI has taken a monumental leap forward with the release of Mixtral 8x7B, an innovative sparse mixture of experts model (SMoE) with open weights. An SMoE is a neural network architecture that boosts traditional model efficiency and scalability. It utilizes specialized “expert” sub-networks to handle different input segments. Mixtral incorporates eight of these expert sub-networks.
Licensed under Apache 2.0, Mixtral sets a new benchmark in the AI landscape. Here's a closer look at what makes Mixtral 8x7B a groundbreaking advancement.
### High-performance with sparse architectures
Mixtral 8x7B stands out for its efficient utilization of parameters and high-quality performance. Despite its total parameter count of 46.7 billion, it operates using only 12.9 billion parameters per token. This unique architecture allows Mixtral to maintain the speed and cost efficiency of a 12.9 billion parameter model while offering the capabilities of a much larger model.
### Superior performance, versatility, and cost-performance optimization
Mixtral rivals leading models like Llama 2 70B and GPT-3.5, excelling in handling large contexts, multilingual processing, code generation, and instruction-following. The Mixtral 8x7B model combines cost efficiency with high performance, using a sparse mixture of experts network for optimized resource usage, offering premium outputs at lower costs compared to similar models.
## Mistral “La plateforme”
Mistral AI's beta platform offers developers generative models focusing on simplicity: Mistral-tiny for cost-effective, English-only text generation (7.6 MT-Bench score), Mistral-small for multilingual support including coding (8.3 score), and Mistral-medium for high-quality, multilingual output (8.6 score). These user-friendly, accurately fine-tuned models facilitate efficient AI deployment, as demonstrated in our article using the Mistral-tiny and the platform's embedding model.
## Why MongoDB Atlas as a vector store?
MongoDB Atlas is a unique, fully-managed platform integrating enterprise data, vector search, and analytics, allowing the creation of tailored AI applications. It goes beyond standard vector search with a comprehensive ecosystem, including models like Mistral, setting it apart in terms of unification, scalability, and security.
MongoDB Atlas unifies operational, analytical, and vector search data services to streamline the building of generative AI-enriched apps. From proof-of-concept to production, MongoDB Atlas empowers developers with scalability, security, and performance for their mission-critical production applications.
According to the Retool AI report, MongoDB takes the lead, earning its place as the top-ranked vector database.
- The vector store works easily together with existing MongoDB databases, making it a simple addition for teams already using MongoDB to manage their data. This means they can start using vector storage without needing to make big changes to their systems.
- MongoDB Atlas is purpose-built to handle large-scale, operation-critical applications, showcasing its robustness and reliability. This is especially important in applications where it's critical to have accurate and accessible data.
- Data in MongoDB Atlas is stored in JSON format, making it an ideal choice for managing a variety of data types and structures. This is particularly useful for AI applications, where the data type can range from embeddings and text to integers, floating-point values, GeoJSON, and more.
- MongoDB Atlas is designed for enterprise use: it features top-tier security, can operate across multiple cloud services, and is fully managed. This ensures organizations can trust it for secure, reliable, and efficient operations.
With MongoDB Atlas, organizations can confidently store and retrieve embeddings alongside your existing data, unlocking the full potential of AI for their applications.
## Overview and implementation of your custom LLM GenAI app
Creating a self-hosted LLM GenAI application integrates the power of open-source AI with the robustness of an enterprise-grade vector store like MongoDB. Below is a detailed step-by-step guide to implementing this innovative system:
### 1. Data acquisition and chunk
The first step is gathering data relevant to your application's domain, including text documents, web pages, and importantly, operational data already stored in MongoDB Atlas. Leveraging Atlas's operational data adds a layer of depth, ensuring your AI application is powered by comprehensive, real-time data, which is crucial for contextually enriched AI responses.
Then, we divide the data into smaller, more manageable chunks. This division is crucial for efficient data processing, guaranteeing the AI model interacts with data that is both precise and reflective of your business's operational context.
### 2.1 Generating embeddings
Utilize **Mistral AI embedding endpoint** to transform your segmented text data into embeddings. These embeddings are numerical representations that capture the essence of your text, making it understandable and usable by AI models.
### 2.2 Storing embeddings in MongoDB vector store
Once you have your embeddings, store them in MongoDB’s vector store. MongoDB Atlas, with its advanced search capabilities, allows for the efficient storing and managing of these embeddings, ensuring that they are easily accessible when needed.
### 2.3 Querying your data
Use **MongoDB’s vector search** capability to query your stored data. You only need to create a vector search index on the embedding field in your document. This powerful feature enables you to perform complex searches and retrieve the most relevant pieces of information based on your query parameters.
### 3. & 4. Embedding questions and retrieving similar chunks
When a user poses a question, generate an embedding for this query. Then, using MongoDB’s search functionality, retrieve data chunks that are most similar to this query embedding. This step is crucial for finding the most relevant information to answer the user's question.
### 5. Contextualized prompt creation
Combine the retrieved segments and the original user query to create a comprehensive prompt. This prompt will provide a context to the AI model, ensuring that the responses generated are relevant and accurate.
### 6. & 7. Customized answer generation from Mistral AI
Feed the contextualized prompt into the Mistral AI 7B LLM. The model will then generate a customized answer based on the provided context. This step leverages the advanced capabilities of Mistral AI to provide specific, accurate, and relevant answers to user queries.
## Implementing a custom LLM GenAI app with Mistral AI and MongoDB Atlas
Now that we have a comprehensive understanding of Mistral AI and MongoDB Atlas and the overview of your next custom GenAI app, let’s dive into implementing a custom large language model GenAI app. This app will allow you to have your own personalized AI assistant, powered by the Mistral AI and supported by the efficient data management of MongoDB Atlas.
In this section, we’ll explain the prerequisites and four parts of the code:
- Needed libraries
- Data preparation process
- Question and answer process
- User interface through Gradio
### 0. Prerequisites
As explained above, in this article, we are going to leverage the Mistral AI model through Mistral “La plateforme.” To get access, you should first create an account on Mistral AI. You may need to wait a few hours (or one day) before your account is activated.
Once your account is activated, you can add your subscription. Follow the instructions step by step on the Mistral AI platform.
Once you have set up your subscription, you can then generate your API key for future usage.
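The code later in this tutorial reads the key from the `MISTRAL_API_KEY` environment variable, so export it in your shell (the value below is a placeholder):

```
export MISTRAL_API_KEY="your_mistral_api_key"
```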
Besides using the Mistral AI “La plateforme,” you have another option to implement the Mistral AI model on a machine featuring Nvidia V100, V100S, or A100 GPUs (not an exhaustive list). If you want to deploy a self-hosted large language model on a public or private cloud, you can refer to my previous article on how to deploy Mistral AI within 10 minutes.
### 1. Import needed libraries
This section shows the versions of the required libraries. Personally, I run my code in VScode. So you need to install the following libraries beforehand. Here is the version at the moment I’m running the following code.
```
mistralai 0.0.8
pymongo 4.3.3
gradio 4.10.0
gradio_client 0.7.3
langchain 0.0.348
langchain-core 0.0.12
pandas 2.0.3
```
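One way to install these dependencies is with pip, pinning the versions listed above; note that `pypdf` is an assumption here, added because LangChain's `PyPDFLoader` relies on it:

```
pip install mistralai==0.0.8 pymongo==4.3.3 gradio==4.10.0 langchain==0.0.348 pandas==2.0.3 pypdf
```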
These include libraries for data processing, web scraping, AI models, and database interactions.
```
import gradio as gr
import os
import pymongo
import pandas as pd
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
```
### 2. Data preparation
The `data_prep()` function loads data from the uploaded PDF document. It extracts the text content, removes unwanted elements, and then splits the data into manageable chunks.
Once the data is chunked, we use the Mistral AI embedding endpoint to compute embeddings for every chunk and save them in the document. Afterward, each document is added to a MongoDB collection.
```
def data_prep(file):
# Set up Mistral client
    api_key = os.environ["MISTRAL_API_KEY"]
client = MistralClient(api_key=api_key)
# Process the uploaded file
loader = PyPDFLoader(file.name)
pages = loader.load_and_split()
# Split data
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=100,
chunk_overlap=20,
separators=["\n\n", "\n", "(?<=\. )", " ", ""],
length_function=len,
)
docs = text_splitter.split_documents(pages)
# Calculate embeddings and store into MongoDB
text_chunks = [text.page_content for text in docs]
df = pd.DataFrame({'text_chunks': text_chunks})
df['embedding'] = df.text_chunks.apply(lambda x: get_embedding(x, client))
collection = connect_mongodb()
df_dict = df.to_dict(orient='records')
collection.insert_many(df_dict)
return "PDF processed and data stored in MongoDB."
```
### Connecting to MongoDB server
The `connect_mongodb()` function establishes a connection to a MongoDB server. It returns a collection object that can be used to interact with the database. This function will be called in the `data_prep()` function.
In order to get your MongoDB connection string, you can go to your MongoDB Atlas console, click the “Connect” button on your cluster, and choose the Python driver.
```
def connect_mongodb():
# Your MongoDB connection string
    mongo_url = os.environ["MONGO_URI"]
client = pymongo.MongoClient(mongo_url)
db = client["mistralpdf"]
collection = db["pdfRAG"]
return collection
```
You can set your `MONGO_URI` by running the following command in your shell.
```
export MONGO_URI="Your_cluster_connection_string"
```
### Getting the embedding
The `get_embedding(text, client)` function generates an embedding for a given text. It replaces newline characters and then calls the Mistral AI “La plateforme” embedding endpoint to get the embedding. This function will be called in both the data preparation and question answering processes.
```
def get_embedding(text, client):
text = text.replace("\n", " ")
embeddings_batch_response = client.embeddings(
model="mistral-embed",
input=text,
)
return embeddings_batch_response.data[0].embedding
```
### 3. Question and answer function
This function is the core of the program. It processes a user's question and creates a response using the context retrieved from MongoDB together with the Mistral AI model.
This process involves several key steps. Here’s how it works:
- Firstly, we generate a numerical representation, called an embedding, through a Mistral AI embedding endpoint, for the user’s question.
- Next, we run a vector search in the MongoDB collection to identify the documents similar to the user’s question.
- It then constructs a contextual background by combining chunks of text from these similar documents. We prepare an assistant instruction by combining all this information.
- The user’s question and the assistant’s instruction are prepared into a prompt for the Mistral AI model.
- Finally, Mistral AI will generate responses to the user thanks to the retrieval-augmented generation process.
```
def qna(users_question):
# Set up Mistral client
    api_key = os.environ["MISTRAL_API_KEY"]
client = MistralClient(api_key=api_key)
question_embedding = get_embedding(users_question, client)
print("-----Here is user question------")
print(users_question)
documents = find_similar_documents(question_embedding)
print("-----Retrieved documents------")
print(documents)
for doc in documents:
doc['text_chunks'] = doc['text_chunks'].replace('\n', ' ')
for document in documents:
print(str(document) + "\n")
context = " ".join([doc["text_chunks"] for doc in documents])
template = f"""
You are an expert who loves to help people! Given the following context sections, answer the
question using only the given context. If you are unsure and the answer is not
explicitly written in the documentation, say "Sorry, I don't know how to help with that."
Context sections:
{context}
Question:
{users_question}
Answer:
"""
messages = [ChatMessage(role="user", content=template)]
chat_response = client.chat(
model="mistral-tiny",
messages=messages,
)
formatted_documents = '\n'.join([doc['text_chunks'] for doc in documents])
return chat_response.choices[0].message, formatted_documents
```
### The last configuration on the MongoDB vector search index
In order to run a vector search query, you only need to create a vector search index in MongoDB Atlas as follows. (You can also learn more about how to create a vector search index.)
```
{
"type": "vectorSearch",
"fields":
{
"numDimensions": 1536,
"path": "'embedding'",
"similarity": "cosine",
"type": "vector"
}
]
}
```
### Finding similar documents
The `find_similar_documents(embedding)` function runs the vector search query against the MongoDB collection. It will be called when the user asks a question, to find documents similar to the question during the question and answering process.
```
def find_similar_documents(embedding):
collection = connect_mongodb()
documents = list(
collection.aggregate([
{
"$vectorSearch": {
"index": "vector_index",
"path": "embedding",
"queryVector": embedding,
"numCandidates": 20,
"limit": 10
}
},
{"$project": {"_id": 0, "text_chunks": 1}}
]))
return documents
```
### 4. Gradio user interface
In order to have a better user experience, we wrap the PDF upload and the chatbot into two tabs using Gradio. Gradio is a Python library that enables the fast creation of customizable web applications for machine learning models and data processing workflows. You can put this code at the end of your Python file. Inside this interface, depending on which tab you are using (data preparation or question and answering), we will call the `data_prep()` function or the `qna()` function.
```
# Gradio Interface for PDF Upload
pdf_upload_interface = gr.Interface(
fn=data_prep,
inputs=gr.File(label="Upload PDF"),
outputs="text",
allow_flagging="never"
)
# Gradio Interface for Chatbot
chatbot_interface = gr.Interface(
fn=qna,
inputs=gr.Textbox(label="Enter Your Question"),
outputs=[
gr.Textbox(label="Mistral Answer"),
gr.Textbox(label="Retrieved Documents from MongoDB Atlas")
],
allow_flagging="never"
)
# Combine interfaces into tabs
iface = gr.TabbedInterface(
[pdf_upload_interface, chatbot_interface],
["Upload PDF", "Chatbot"]
)
# Launch the Gradio app
iface.launch()
```
## Conclusion
This detailed guide has delved into the dynamic combination of Mistral AI and MongoDB, showcasing how to develop a bespoke large language model GenAI application. Integrating the advanced capabilities of Mistral AI with MongoDB's robust data management features enables the creation of a custom AI assistant that caters to unique requirements.
We have provided a straightforward, step-by-step methodology, covering everything from initial data gathering and segmentation to the generation of embeddings and efficient data querying. This guide serves as a comprehensive blueprint for implementing the system, complemented by practical code examples and instructions for setting up Mistral AI on a GPU-powered machine and linking it with MongoDB.
Leveraging Mistral AI and MongoDB Atlas, users gain access to the expansive possibilities of AI applications, transforming our interaction with technology and unlocking new, secure ways to harness data insights while maintaining privacy.
### Learn more
To learn more about how Atlas helps organizations integrate and operationalize GenAI and LLM data, take a look at our Embedding Generative AI whitepaper to explore RAG in more detail.
If you want to know more about how to deploy a self-hosted Mistral AI with MongoDB, you can refer to my previous articles: Unleashing AI Sovereignty: Getting Mistral.ai 7B Model Up and Running in Less Than 10 Minutes and Starting Today with Mistral AI & MongoDB: A Beginner’s Guide to a Self-Hosted LLM Generative AI Application.
Mixture of Experts Explained
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "This tutorial will go over how to integrate Mistral AI and MongoDB for a custom LLM genAI application.",
"contentType": "Tutorial"
} | Revolutionizing AI Interaction: Integrating Mistral AI and MongoDB for a Custom LLM GenAI Application | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/boosting-ai-build-chatbot-data-mongodb-atlas-vector-search-langchain-templates-using-rag-pattern | created | # Boosting AI: Build Your Chatbot Over Your Data With MongoDB Atlas Vector Search and LangChain Templates Using the RAG Pattern
In this tutorial, I will show you the simplest way to implement an AI chatbot-style application using MongoDB Atlas Vector Search with LangChain Templates and the retrieval-augmented generation (RAG) pattern for more precise chat responses.
## Retrieval-augmented generation (RAG) pattern
The retrieval-augmented generation (RAG) model enhances LLMs by supplementing them with additional, relevant data, ensuring grounded and precise responses for business purposes. Through vector search, RAG identifies and retrieves pertinent documents from databases, which it uses as context sent to the LLM along with the query, thereby improving the LLM's response quality. This approach decreases inaccuracies by anchoring responses in factual content and ensures responses remain relevant with the most current data. RAG optimizes token use without expanding an LLM's token limit, focusing on the most relevant documents to inform the response process.
MongoDB has collaborated closely with LangChain on this effort. This collaboration has produced a retrieval-augmented generation template that capitalizes on the strengths of MongoDB Atlas Vector Search along with OpenAI's technologies. The template offers a developer-friendly approach to crafting and deploying chatbot applications tailored to specific data sets. The LangChain templates serve as a deployable reference framework, accessible as a REST API via LangServe.
The alliance has also been instrumental in showcasing the latest Atlas Vector Search advancements, notably the `$vectorSearch` aggregation stage, now embedded within LangChain's Python and JavaScript offerings. The joint venture is committed to ongoing development, with plans to unveil more templates. These future additions are intended to further accelerate developers' abilities to realise and launch their creative projects.
## LangChain Templates
LangChain Templates present a selection of reference architectures that are designed for quick deployment, available to any user. These templates introduce an innovative system for the crafting, exchanging, refreshing, acquiring, and tailoring of diverse chains and agents. They are crafted in a uniform format for smooth integration with LangServe, enabling the swift deployment of production-ready APIs. Additionally, these templates provide a free sandbox for experimental and developmental purposes.
The `rag-mongo` template is specifically designed to perform retrieval-augmented generation utilizing MongoDB and OpenAI technologies. We will take a closer look at the `rag-mongo` template in the following section of this tutorial.
## Using LangChain RAG templates
To get started, you only need to install the `langchain-cli`.
```
pip3 install -U "langchain-cli[serve]"
```
Use the LangChain CLI to bootstrap a LangServe project quickly. The application will be named `my-blog-article`, and the name of the template must also be specified. I’ll name it `rag-mongo`.
```
langchain app new my-blog-article --package rag-mongo
```
This will create a new directory called `my-blog-article` with two folders:
* `app`: This is where LangServe code will live.
* `packages`: This is where your chains or agents will live.
Now, it is necessary to modify the `my-blog-article/app/server.py` file by adding the following code:
```
from rag_mongo import chain as rag_mongo_chain
add_routes(app, rag_mongo_chain, path="/rag-mongo")
```
We will need to insert data into MongoDB Atlas. In our exercise, we utilize a publicly accessible PDF document titled "MongoDB Atlas Best Practices" as a data source for constructing a text-searchable vector space. The data will be ingested into the MongoDB `langchain.vectorSearch` namespace.
In order to do it, navigate to the directory `my-blog-article/packages/rag-mongo` and in the file `ingest.py`, change the default names of the MongoDB database and collection. Additionally, modify the URL of the document you wish to use for generating embeddings.
```
cd my-blog-article/packages/rag-mongo
```
My `ingest.py` is located on GitHub. Note that if you change the database and collection name in `ingest.py`, you also need to change it in `rag_mongo`/`chain.py`. My `chain.py` is also located on GitHub. Next, export your OpenAI API Key and MongoDB Atlas URI.
```
export OPENAI_API_KEY="xxxxxxxxxxx"
export MONGO_URI="mongodb+srv://user:passwd@vectorsearch.abc.mongodb.net/?retryWrites=true"
```
Creating and inserting embeddings into MongoDB Atlas using LangChain templates is very easy. You just need to run the `ingest.py` script. It will first load a document from a specified URL using the `PyPDFLoader`. Then, it splits the text into manageable chunks using the `RecursiveCharacterTextSplitter`. Finally, the script uses the OpenAI Embeddings API to generate embeddings for each chunk and inserts them into the MongoDB Atlas `langchain.vectorSearch` namespace.
```
python3 ingest.py
```
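For orientation, the core of that ingestion flow can be sketched as follows. The module paths, chunking parameters, and source URL below are assumptions based on typical LangChain usage at the time, so treat the template's own `ingest.py` as the source of truth.
```
import os

from pymongo import MongoClient
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch

# Target namespace in Atlas: database "langchain", collection "vectorSearch"
client = MongoClient(os.environ["MONGO_URI"])
collection = client["langchain"]["vectorSearch"]

# Load the source PDF and split it into overlapping chunks
loader = PyPDFLoader("https://example.com/mongodb-atlas-best-practices.pdf")  # placeholder URL
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(loader.load())

# Generate an embedding per chunk and insert everything into Atlas
MongoDBAtlasVectorSearch.from_documents(
    chunks,
    OpenAIEmbeddings(disallowed_special=()),
    collection=collection,
    index_name="default",
)
```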
Now, it's time to initialize Atlas Vector Search. We will do this through the Atlas UI. In the Atlas UI, choose `Search` and then `Create Search Index`. Afterwards, choose the JSON Editor to declare the index parameters as well as the database and collection where the Atlas Vector Search index will be established (`langchain.vectorSearch`). Set the index name to `default`. The definition of my index is presented below.
```
{
  "type": "vectorSearch",
  "fields": [
    {
      "path": "embedding",
      "dimensions": 1536,
      "similarity": "cosine",
      "type": "vector"
    }
  ]
}
```
A detailed procedure is available on GitHub.
Let's now take a closer look at the central component of the LangChain `rag-mongo` template: the `chain.py` script. This script utilizes the `MongoDBAtlasVectorSearch`
class and is used to create an object — `vectorstore` — that interfaces with MongoDB Atlas's vector search capabilities for semantic similarity searches. The `retriever` is then configured from `vectorstore` to perform these searches, specifying the search type as "similarity."
```
vectorstore = MongoDBAtlasVectorSearch.from_connection_string(
    MONGO_URI,
    DB_NAME + "." + COLLECTION_NAME,
    OpenAIEmbeddings(disallowed_special=()),
    index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
)
retriever = vectorstore.as_retriever()
```
This configuration ensures the most contextually relevant document is retrieved from the database. Upon retrieval, the script merges this document with a user's query and leverages the `ChatOpenAI` class to process the input through OpenAI's GPT models, crafting a coherent answer. To further enhance this process, the ChatOpenAI class is initialized with the `gpt-3.5-turbo-16k-0613` model, chosen for its optimal performance. The temperature is set to 0, promoting consistent, deterministic outputs for a streamlined and precise user experience.
```
model = ChatOpenAI(model_name="gpt-3.5-turbo-16k-0613",temperature=0)
```
This class permits tailoring API requests, offering control over retry attempts, token limits, and response temperature. It adeptly manages multiple response generations, response caching, and callback operations. Additionally, it facilitates asynchronous tasks to streamline response generation and incorporates metadata and tagging for comprehensive API run tracking.
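To make that flow concrete, the heart of `chain.py` can be sketched roughly as follows, reusing the `retriever` and `model` objects shown above. The exact prompt wording and import paths here are assumptions, so refer to the template's `chain.py` for the authoritative version.
```
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableParallel, RunnablePassthrough

# Prompt that injects the retrieved documents as context for the user's question
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

# Fetch context with the retriever, pass the question through, then call the model
chain = (
    RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
    | prompt
    | model
    | StrOutputParser()
)
```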
## LangServe Playground
After successfully creating and storing embeddings in MongoDB Atlas, you can start utilizing the LangServe Playground by executing the `langchain serve` command, which grants you access to your chatbot.
```
langchain serve
INFO: Will watch for changes in these directories:
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [50552] using StatReload
INFO: Started server process [50557]
INFO: Waiting for application startup.
LANGSERVE: Playground for chain "/rag-mongo" is live at:
LANGSERVE: │
LANGSERVE: └──> /rag-mongo/playground
LANGSERVE:
LANGSERVE: See all available routes at /docs
```
This will start the FastAPI application, with a server running locally at `http://127.0.0.1:8000`. All templates can be viewed at `http://127.0.0.1:8000/docs`, and the playground can be accessed at `http://127.0.0.1:8000/rag-mongo/playground/`.
The chatbot will answer questions about best practices for using MongoDB Atlas with the help of context provided through vector search. Questions on other topics will not be considered by the chatbot.
Go to the following URL:
```
http://127.0.0.1:8000/rag-mongo/playground/
```
And start using your template! You can ask questions related to MongoDB Atlas in the chat.
![LangServe Playground][2]
By expanding the `Intermediate steps` menu, you can trace the entire process of formulating a response to your question. This process encompasses searching for the most pertinent documents related to your query and forwarding them to the OpenAI API to serve as the context for the query. This methodology aligns with the RAG pattern, wherein relevant documents are retrieved to furnish context for generating a well-informed response to a specific inquiry.
We can also use `curl` to interact with the LangServe REST API and call endpoints, such as `/rag-mongo/invoke`:
```
curl -X POST "http://127.0.0.1:8000/rag-mongo/invoke" \
-H "Content-Type: application/json" \
-d '{"input": "Does MongoDB support transactions?"}'
```
```
{"output":"Yes, MongoDB supports transactions.","callback_events":[],"metadata":{"run_id":"06c70537-8861-4dd2-abcc-04a85a50bcb6"}}
```
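If you prefer to call the chain from Python instead of `curl`, LangServe also ships a client-side `RemoteRunnable`. A minimal sketch, assuming the server started above is still running locally:
```
from langserve import RemoteRunnable

# Point the client at the locally running rag-mongo chain
rag = RemoteRunnable("http://127.0.0.1:8000/rag-mongo")

# The rag-mongo template takes the question as a plain string
print(rag.invoke("Does MongoDB support transactions?"))
```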
We can also send batch requests to the API using the `/rag-mongo/batch` endpoint, for example:
```
curl -X POST "http://127.0.0.1:8000/rag-mongo/batch" \
-H "Content-Type: application/json" \
-d '{
"inputs": [
"What options do MongoDB Atlas Indexes include?",
"Explain Atlas Global Cluster",
"Does MongoDB Atlas provide backups?"
],
"config": {},
"kwargs": {}
}'
```
```
{"output":["MongoDB Atlas Indexes include the following options:\n- Compound indexes\n- Geospatial indexes\n- Text search indexes\n- Unique indexes\n- Array indexes\n- TTL indexes\n- Sparse indexes\n- Partial indexes\n- Hash indexes\n- Collated indexes for different languages","Atlas Global Cluster is a feature provided by MongoDB Atlas, a cloud-based database service. It allows users to set up global clusters on various cloud platforms such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform. \n\nWith Atlas Global Cluster, users can easily distribute their data across different regions by just a few clicks in the MongoDB Atlas UI. The deployment and management of infrastructure and database resources required for data replication and distribution are taken care of by MongoDB Atlas. \n\nFor example, if a user has an accounts collection that they want to distribute among their three regions of business, Atlas Global Cluster ensures that the data is written to and read from different regions, providing high availability and low latency access to the data.","Yes, MongoDB Atlas provides backups."],"callback_events":[],"metadata":{"run_ids":["1516ba0f-1889-4688-96a6-d7da8ff78d5e","4cca474f-3e84-4a1a-8afa-e24821fb1ec4","15cd3fba-8969-4a97-839d-34a4aa167c8b"]}}
```
For comprehensive documentation and further details, please visit `http://127.0.0.1:8000/docs`.
## Summary
In this article, we've explored the synergy of MongoDB Atlas Vector Search with LangChain Templates and the RAG pattern to significantly improve chatbot response quality. By implementing these tools, developers can ensure their AI chatbots deliver highly accurate and contextually relevant answers. Step into the future of chatbot technology by applying the insights and instructions provided here. Elevate your AI and engage users like never before. Don't just build chatbots — craft intelligent conversational experiences. Start now with MongoDB Atlas and LangChain!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbe1d8daf4783a8a1/6578c9297cf4a90420f5d76a/Boosting_AI_-_1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt435374678f2a3d2a/6578cb1af2362505ae2f7926/Boosting_AI_-_2.png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "Discover how to enhance your AI chatbot's accuracy with MongoDB Atlas Vector Search and LangChain Templates using the RAG pattern in our comprehensive guide. Learn to integrate LangChain's retrieval-augmented generation model with MongoDB for precise, data-driven chat responses. Ideal for developers seeking advanced AI chatbot solutions.",
"contentType": "Tutorial"
} | Boosting AI: Build Your Chatbot Over Your Data With MongoDB Atlas Vector Search and LangChain Templates Using the RAG Pattern | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/cpp/turn-ble | created | # Turn BLE: Implementing BLE Sensors with MCU Devkits
In the first episode of this series, I shared with you the project that I plan to implement. I went through the initial planning and presented a selection of MCU devkit boards that would be suitable for our purposes.
In this episode, I will try and implement BLE communication on one of the boards. Since the idea is to implement
this project as if it were a proof of concept (PoC), once I am moderately successful with one implementation, I will stop there and move forward to the next step, which is implementing the BLE central role in the Raspberry Pi.
Are you ready? Then buckle up for the Bluetooth bumps ahead!
# Table of Contents
1. Concepts
1. Bluetooth classic vs BLE
2. BLE data
3. BLE roles
2. Setup
1. Development environment
2. Testing environment
3. BLE sensor implementation
1. First steps
2. Read from a sensor
3. BLE peripheral GAP
4. Add a sensor service
5. Add notifications
4. Recap
# Concepts
## Bluetooth classic vs BLE
Bluetooth is a technology for wireless communications. Although we talk about Bluetooth as if it were a single thing, Bluetooth Classic and Bluetooth Low Energy are mostly different beasts and also incompatible. Bluetooth Classic has a higher transfer rate (up to 3Mb/s) than Bluetooth Low Energy (up to 2Mb/s), but with great transfer rate comes
great power consumption (as Spidey's uncle used to say).
# Setup
## Development environment
To install the firmware on the board, we will use UF2, a mechanism created by Microsoft and implemented by some boards that emulates a storage device when connected to the USB port. You can then drop a file into that storage device in a special format. The file contains the firmware that you want to install with some metadata and redundancy and, after some basic verifications, it gets flashed to the microcontroller automatically.
In this case, we are going to flash the latest version of MicroPython to the RP2. We press and hold down the BOOTSEL button while we plug the board to the USB, and we drop the latest firmware UF2 file into the USB mass storage device that appears and that is called RPI-RP2. The firmware will be flashed and the board rebooted.
For writing the code, I will use VS Code with the MicroPico extension installed in a profile, so you can have different extensions for different boards if needed. In this profile, you can also install the recommended Python extensions to help you with the Python code.
Let's start by creating a new directory for our project and open VSCode there:
```sh
mkdir BLE-periph-RP2
cd BLE-periph-RP2
code .
```
Then, let's initialize the project so code completion works. From the main menu, select `View` -> `Command Palette` (or Command + Shift + P) and find `MicroPico: Configure Project`. This command will add a file to the project and various buttons to the bottom left of your editor that will allow you to upload the files to the board, execute them, and reset it, among other things.
You can find all of the code that is explained in the repository. Feel free to make pull requests where you see they fit or ask questions.
## Testing environment
Since we are only going to develop the BLE peripheral, we will need some existing tool to act as the BLE central. There are several free mobile apps available that will do that. I am going to use "nRF Connect for Mobile" (Android or iOS), but there are others that can help too, like LightBlue (macOS/iOS or Android).
# BLE sensor implementation
## First steps
1. MicroPython loads and executes code stored in two files, called `boot.py` and `main.py`, in that order. The first one is used to configure some board features, like network or peripherals, just once and only after (re)starting the board. It must only contain that to avoid booting problems. The `main.py` file gets loaded and executed by MicroPython right after `boot.py`, if it exists, and that contains the application code. Unless explicitly configured, `main.py` runs in a loop, but it can be stopped more easily. In our case, we don't need any prior configuration, so let's start with a `main.py` file.
2. Let's start by blinking the builtin LED. So the first thing that we are going to need is a module that allows us to work with the different capabilities of the board. That module is named `machine` and we import it, just to have access to the pins:
```python
from machine import Pin
```
3. We then get an instance of the pin that is connected to the LED that we'll use to output voltage, switching it on or off:
```python
led = Pin('LED', Pin.OUT)
```
4. We create an infinite loop and turn on and off the LED with the methods of that name, or better yet, with the `toggle()` method.
```python
while True:
    led.toggle()
```
5. This is going to switch the led on and off so fast that we won't be able to see it, so let's introduce a delay, importing the `time` module:
```python
import time

while True:
    led.toggle()
    time.sleep_ms(500)
```
6. Run the code using the `Run` button at the left bottom of VSCode and see the LED blinking. Yay!
## Read from a sensor
Our devices are going to be measuring the noise level from a microphone and sending it to the collecting station. However, our Raspberry Pi Pico doesn't have a built-in microphone, so we are going to start by using the temperature sensor that the RP2 has to get some measurements.
1. First, we import the analog-to-digital-converting capabilities:
```python
from machine import ADC
```
2. The onboard sensor is on the fifth (index 4) ADC channel, so we get a variable pointing to it:
```python
adc = ADC(4)
```
3. In the main loop, read the sensor. The reading is a 16-bit unsigned integer that maps to the 0V to 3.3V range and converts into degrees Celsius according to the specs of the sensor. Print the value:
```python
temperature = 27.0 - ((adc.read_u16() * 3.3 / 65535) - 0.706) / 0.001721
print("T: {}ºC".format(temperature))
```
4. We run this new version of the code and the measurements should be updated every half a second.
## BLE peripheral GAP
We are going to start by advertising the device name and its characteristics. That is done with the Generic Access Profile (GAP) for the peripheral role. We could use the low level interface to Bluetooth provided by the `bluetooth` module or the higher level interface provided by `aioble`. The latter is simpler and recommended in the MicroPython manual, but the documentation is a little bit lacking. We are going to start with this one and read its source code when in doubt.
1. We will start by importing `aioble` and `bluetooth`, i.e., the low-level Bluetooth module (used here only for the UUIDs):
```python
import aioble
import bluetooth
```
2. All devices must be able to identify themselves via the Device Information Service, identified with the UUID 0x180A. We start by creating this service:
```python
# Constants for the device information service
_SVC_DEVICE_INFO = bluetooth.UUID(0x180A)
svc_dev_info = aioble.Service(_SVC_DEVICE_INFO)
```
3. Then, we are going to add some read-only characteristics to that service, with initial values that won't change:
```python
_CHAR_MANUFACTURER_NAME_STR = bluetooth.UUID(0x2A29)
_CHAR_MODEL_NUMBER_STR = bluetooth.UUID(0x2A24)
_CHAR_SERIAL_NUMBER_STR = bluetooth.UUID(0x2A25)
_CHAR_FIRMWARE_REV_STR = bluetooth.UUID(0x2A26)
_CHAR_HARDWARE_REV_STR = bluetooth.UUID(0x2A27)
aioble.Characteristic(svc_dev_info, _CHAR_MANUFACTURER_NAME_STR, read=True, initial='Jorge')
aioble.Characteristic(svc_dev_info, _CHAR_MODEL_NUMBER_STR, read=True, initial='J-0001')
aioble.Characteristic(svc_dev_info, _CHAR_SERIAL_NUMBER_STR, read=True, initial='J-0001-0000')
aioble.Characteristic(svc_dev_info, _CHAR_FIRMWARE_REV_STR, read=True, initial='0.0.1')
aioble.Characteristic(svc_dev_info, _CHAR_HARDWARE_REV_STR, read=True, initial='0.0.1')
```
4. Now that the service is created with the relevant characteristics, we register it:
```python
aioble.register_services(svc_dev_info)
```
5. We can now create an asynchronous task that will take care of handling the connections. By definition, our peripheral can only be connected to one central device. We enable the Generic Access Profile (GAP), a.k.a. the Generic Access service, by starting to advertise the registered services and thus, we accept connections. We could disallow connections (`connect=False`) for connection-less devices, such as beacons. Device name and appearance are mandatory characteristics of GAP, so they are parameters of the `advertise()` method.
```python
from micropython import const

_ADVERTISING_INTERVAL_US = const(200_000)
_APPEARANCE = const(0x0552)  # Multi-sensor

async def task_peripheral():
    """ Task to handle advertising and connections """
    while True:
        async with await aioble.advertise(
            _ADVERTISING_INTERVAL_US,
            name='RP2-SENSOR',
            appearance=_APPEARANCE,
            services=[_SVC_DEVICE_INFO]
        ) as connection:
            print("Connected from ", connection.device)
            await connection.disconnected()  # NOT connection.disconnect()
            print("Disconnected")
```
6. It would be useful to know when this peripheral is connected so we can do what is needed. We create a global boolean variable and expose it to be changed in the task for the peripheral:
```python
connected = False

async def task_peripheral():
    """ Task to handle advertising and connections """
    global connected
    while True:
        connected = False
        async with await aioble.advertise(
            _ADVERTISING_INTERVAL_US,
            appearance=_APPEARANCE,
            name='RP2-SENSOR',
            services=[_SVC_DEVICE_INFO]
        ) as connection:
            print("Connected from ", connection.device)
            connected = True
```
7. We can provide visual feedback about the connection status in another task:
```python
async def task_flash_led():
    """ Blink the on-board LED, faster if disconnected and slower if connected """
    BLINK_DELAY_MS_FAST = const(100)
    BLINK_DELAY_MS_SLOW = const(500)
    while True:
        led.toggle()
        if connected:
            await asyncio.sleep_ms(BLINK_DELAY_MS_SLOW)
        else:
            await asyncio.sleep_ms(BLINK_DELAY_MS_FAST)
```
8. Next, we import [`asyncio` to use it with the async/await mechanism:
```python
import uasyncio as asyncio
```
9. And move the sensor read into another task:
```python
async def task_sensor():
    """ Task to handle sensor measures """
    while True:
        temperature = 27.0 - ((adc.read_u16() * 3.3 / 65535) - 0.706) / 0.001721
        print("T: {}°C".format(temperature))
        time.sleep_ms(_TEMP_MEASUREMENT_INTERVAL_MS)
```
10. We define a constant for the interval between temperature measurements:
```python
_TEMP_MEASUREMENT_INTERVAL_MS = const(15_000)
```
11. And replace the delay with an asynchronous compatible implementation:
```python
await asyncio.sleep_ms(_TEMP_MEASUREMENT_INTERVAL_MS)
```
12. We delete the import of the `time` module that we won't be needing anymore.
13. Finally, we create a main function where all the tasks are instantiated:
```python
async def main():
    """ Create all the tasks """
    tasks = [
        asyncio.create_task(task_peripheral()),
        asyncio.create_task(task_flash_led()),
        asyncio.create_task(task_sensor()),
    ]
    await asyncio.gather(*tasks)
```
14. And launch main when the program starts:
```python
asyncio.run(main())
```
15. Wash, rinse, repeat. I mean, run it and try to connect to the device using one of the applications mentioned above. You should be able to find and read the hard-coded characteristics.
## Add a sensor service
1. We define a new service, like what we did with the *device info* one. In this case, it is an Environmental Sensing Service (ESS) that exposes one or more characteristics for different types of environmental measurements.
```python
# Constants for the Environmental Sensing Service
_SVC_ENVIRONM_SENSING = bluetooth.UUID(0x181A)
svc_env_sensing = aioble.Service(_SVC_ENVIRONM_SENSING)
```
2. We also define a characteristic for… yes, you guessed it, a temperature measurement:
```python
_CHAR_TEMP_MEASUREMENT = bluetooth.UUID(0x2A1C)
temperature_char = aioble.Characteristic(svc_env_sensing, _CHAR_TEMP_MEASUREMENT, read=True)
```
3. We then add the service to the one that we registered:
```python
aioble.register_services(svc_dev_info, svc_env_sensing)
```
4. And also to the services that get advertised:
```python
services=[_SVC_DEVICE_INFO, _SVC_ENVIRONM_SENSING]
```
5. The format in which the data must be written is specified in the "GATT Specification Supplement" document. My advice is that before you select the characteristic that you are going to use, you check the data that it is going to contain. For this characteristic, we need to encode the temperature as an IEEE 11073-20601 32-bit FLOAT (mem-float32):
```python
def _encode_ieee11073(value, precision=2):
    """ Binary representation of float value as IEEE-11073:20601 32-bit FLOAT """
    return int(value * (10 ** precision)).to_bytes(3, 'little', True) + struct.pack('<b', -precision)
```
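With the encoder in place, the measurement task can store the encoded value in the characteristic so that connected centrals can read it. Below is a minimal sketch, assuming `struct` is imported at the top of `main.py` and reusing the sensor task from earlier:
```python
import struct  # required by _encode_ieee11073

async def task_sensor():
    """ Task to read the sensor and expose the value over BLE """
    while True:
        temperature = 27.0 - ((adc.read_u16() * 3.3 / 65535) - 0.706) / 0.001721
        # Store the IEEE-11073 encoded payload so centrals can read it
        temperature_char.write(_encode_ieee11073(temperature))
        await asyncio.sleep_ms(_TEMP_MEASUREMENT_INTERVAL_MS)
```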
## Add notifications
The "GATT Specification Supplement" document states that notifications should be implemented adding a "Client Characteristic Configuration" descriptor, where they get enabled and initiated. Once the notifications are enabled, they should obey the trigger conditions set in the "ES Trigger Setting" descriptor. If two or three (max allowed) trigger descriptors are defined for the same characteristic, then the "ES Configuration" descriptor must be present too to define if the triggers should be combined with OR or AND. Also, to change the values of these descriptors, client binding --i.e. persistent pairing-- is required.
This is a lot of work for a proof of concept, so we are going to simplify it by notifying every time the sensor is read. Let me make myself clear, this is **not** the way it should be done. We are cutting corners here, but my understanding at this point in the project is that we can postpone this part of the implementation because it does not affect the viability of our device. We add a to-do to remind us later that we will need to do this, if we decide to go with Bluetooth sensors over MQTT.
1. We change the characteristic declaration to enable notifications:
```python
temperature_char = aioble.Characteristic(svc_env_sensing, _CHAR_TEMP_MEASUREMENT, read=True, notify=True)
```
2. We add a descriptor, although we are going to ignore it for now:
```python
_DESC_ES_TRIGGER_SETTING = bluetooth.UUID(0x290D)
aioble.Descriptor(temperature_char, _DESC_ES_TRIGGER_SETTING, write=True, initial=struct.pack("<B", 0x00))  # the initial condition byte here is a placeholder value
```
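With notifications enabled on the characteristic, the simplified approach described above boils down to pushing an update on every measurement. A minimal sketch, assuming the `task_sensor()` shown earlier and relying on the `send_update` flag of aioble's `Characteristic.write()` to notify subscribed centrals:
```python
# Inside task_sensor(), write the new value and notify subscribers in one call
temperature_char.write(_encode_ieee11073(temperature), send_update=True)
```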
# Recap
In this article, I have covered some relevant Bluetooth Low Energy concepts and put them in practice by using them in writing the firmware of a Raspberry Pi Pico board. In this firmware, I used the on-board LED, read from the on-board temperature sensor, and implemented a BLE peripheral that offered two services and a characteristic that depended on measured data and could push notifications.
We haven't connected a microphone to the board or read noise levels using it yet. I have decided to postpone this until we have decided which mechanism will be used to send the data from the sensors to the collecting stations: BLE or MQTT. If, for any reason, I have to switch boards while implementing the next steps, this time investment would be lost. So, it seems reasonable to move this part to later in our development effort.
In my next article, I will guide you through how we need to interact with Bluetooth from the command line and how Bluetooth can be used for our software using DBus. The goal is to understand what we need to do in order to move from theory to practice using C++ later.
If you have questions or feedback, join me in the MongoDB Developer Community!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt26289a1e0bd71397/6565d3e3ca38f02d5bd3045f/bluetooth.jpg
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte87a248b6f6e9663/6565da9004116d59842a0c77/RP2-bootsel.JPG | md | {
"tags": [
"C++",
"Python"
],
"pageDescription": "After having sketched the plan in our first article, this is the first one where we start coding. In this hands-on article, you will understand how to write firmware for a Raspberry Pi Pico (RP2) board try that implements offering sensor data through Bluetooth Low Energy communication.",
"contentType": "Tutorial"
} | Turn BLE: Implementing BLE Sensors with MCU Devkits | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/multicloud-clusters-with-andrew-davidson | created | # MongoDB Atlas Multicloud Clusters
In this episode of the podcast, Nic and I are joined by Andrew Davidson,
VP of Cloud Product at MongoDB. Andrew shares some details of the latest
innovation in MongoDB Atlas and talks about some of the ways multi-cloud
clusters can help developers.
:youtube[]{vid=GWKa_VJNv7I}
Michael Lynn (00:00): Welcome to the podcast. On this episode, Nic and
I sit down with Andrew Davidson, VP of cloud product here at MongoDB.
We're talking today about the latest innovation built right into MongoDB
Atlas, our database-as-a-service multi-cloud. So this gives you the
ability to deploy and manage your instances of MongoDB in the cloud
across the three major cloud providers: AWS, Azure, and GCP. Andrew
tells us all about this innovation and how it could be used and some of
the benefits. So stay tuned. I hope you enjoyed the episode.
Michael Lynn (00:52): Andrew Davidson, VP of cloud product with MongoDB.
How are you, sir?
Andrew Davidson (00:57): Good to see you, Mike. I'm doing very well.
Thank you. It's been a busy couple of weeks and I'm super excited to be
here to talk to you about what we've been doing.
Michael Lynn (01:05): Absolutely. We're going to talk about multi-cloud
today and innovation added to MongoDB Atlas. But before we get there,
Andrew, I wonder if you would just explain or just introduce yourself to
the audience. Who are you and what do you do?
Andrew Davidson (01:19): Sure. Yeah. Yeah. So as Mike introed me
earlier, I'm VP of cloud products here at MongoDB, which basically means
that I focus on our cloud business and what we're bringing to market for
our customers and also thinking about how those services for our
customers evolve over time and the roadmap around them and how we
explain them to the world as well and how our users use them and over
time, grow on them in deep partnership with us. So I've been around
MongoDB for quite some time, for eight years. In that time, have really
sort of seen this huge shift that everyone involved at MongoDB has been
part of with our DNA shifting from being more of a software company, to
being a true cloud company. It's been a really, a five-year journey over
the last five years. To me, this announcement we made last week that
Mike was just alluding to is really the culmination in many ways of that
journey. So couldn't be more excited.
Michael Lynn (02:12): Yeah, fantastic. Eight years. Eight years at a
software company is a lifetime. You were at Google prior to this. What
did you do at Google?
Andrew Davidson (02:23): I was involved in a special team. They're
called Ground Truth. It was remapping the world and it was all about
building a new map dataset using Google's unique street view and other
inputs to basically make all of the maps that you utilize every day on
Google maps better and for Google to be able to evolve that dataset
faster. So it was a very human project that involved thousands of human
operators doing an enormous amount of complex work because the bottom
line was, this is not something that you could do with ML at that point
anyway. I'm sure they've evolved a little bit since then. It's been a
long time.
Michael Lynn (02:59): Fantastic. So in your eight years, what other
things have you done at MongoDB?
Andrew Davidson (03:05): So I really started out focusing on our
traditional, on-prem management software, something called MongoDB ops
manager, which was kind of the core differentiated in our enterprise
advanced offering. At that time, the company was more focused on
essentially, monetizing getting off the ground, through traditional IT
operations. Even though we were always about developers and developers
were always building great new applications on the database, in a way,
we had sort of moved our focus from a monetization perspective towards a
more ops centered view, and I was a big part of that. But I was able to
make that shift and kind of recenter, recenter on the developer when we
kind of moved into a true cloud platform and that's been a lot of fun
ever since.
Michael Lynn (03:52): Yeah. Amazing journey. So from ops manager to
Atlas. I want to be cognizant that not all of our listeners will be
familiar with Atlas. So maybe give a description of what Atlas is from
your perspective.
Andrew Davidson (04:08): Totally. Yeah. Yeah. So MongoDB Atlas is a global
cloud database service. It's available on the big three cloud providers,
AWS, Google Cloud, and Azure. And it's truly elastic and declarative,
meaning you can describe a database cluster in any part of the world, in
any region, 79 regions across the three providers and Atlas does all the
heavy lifting to get you there, to do the lifecycle management. You can
do infrastructure as code, you can manage your database clusters in
Terraform, or you can use our beautiful user interface to learn and
deploy. We realized it's not enough to have an elastic database service.
That's the starting point. It's also not enough to have the best modern
database, one that's so native to developers, one that speaks to that
rich data model of MongoDB with the secondary indexes and all the rest.
Really, we needed to go beyond the database.
Andrew Davidson (04:54): So we focused heavily on helping our customers
with prescriptive guidance, schema advice, index suggestions, and you'll
see us keep evolving there because we recognize that really every week,
tens of thousands of people are coming onto the platform for the first
time. We need to just lower the barrier to entry to build successful
applications on the database. We've also augmented Atlas with key
platform expansions by including search. We have Lucene-based search
indexes now native to Atlas. So you don't have to ETL that data to a
search engine and basically, build search right into your operational
applications. We've got online archive for data tiering into object
storage economics. With MongoDB Realm, we now have synchronization all
the way back to the Realm mobile database and data access services all
native to the platform. So it's all very exciting, but fundamentally
what has been missing until just last week was true multi-cloud
clusters, the ability to mix and match those databases across the clouds
to have replicas that span the cloud providers or to seamlessly move
from one provider to the other with no downtime, no change in connection
string. So that's really exciting.
Nic Raboy (06:02): Hey, Andrew, I have a question for you. This is a
question that I received quite a bit. So when setting up your Atlas
cluster, you're of course asked to choose between Amazon, Google, and
Microsoft for your hosting. Can you maybe talk about how that's
different or what that's really for in comparison to the multi-cloud
that we're talking about today?
Andrew Davidson (06:25): Yeah, sure. Look, being intellectually honest,
most customers of ours, most developers, most members of the community
have a preferred cloud platform and all of the cloud platforms are great
in their own ways. I think they shine in so many ways. There's lots of
reasons why folks will start on Google, or start on Azure, or start at
AWS. Usually, there's that preferred provider. So most users will deploy
an Atlas cluster into their target provider where their other
infrastructure lives, where their application tier lives, et cetera.
That's where the world is today for the most part. We know though that
we're kind of at the bleeding edge of a new change that's happening in
this market where over time, people are going to start more and more,
mixing and take advantage of the best of the different cloud providers.
So I think those expectations are starting to shift and over time,
you'll see us probably boost the prominence of the multi-cloud option as
the market kind of moves there as well.
Michael Lynn (07:21): So this is available today and what other
requirements are there if I want to deploy an instance of MongoDB and
leverage multi-cloud?
Andrew Davidson (07:30): Yeah, that's a great question. Fundamentally,
in order to use the multi-cloud database cluster, I think it kind of
depends on what your use case is, what you're trying to achieve. But
generally speaking, database in isolation on a cloud provider isn't
enough. You need to use something that's connecting to and using that
database. So broadly speaking, you're going to want to have an
application tier that's able to connect the database and if you're
across multiple clouds and you're doing that for various reasons, like
for example, high availability resiliency to be able to withstand the
adage of a full cloud provider, well then you would want your app tier
to also be multi-cloud.
Andrew Davidson (08:03): That's the kind of thing that traditionally,
folks have not thought was easy, but it's getting easier all the time.
That's why it kind of... We're opening this up at the data tier, and
then others, the Kubernetes platform, et cetera, are really opening up
that portability at the app tier and really making this possible for the
market. But before we sort of keep focusing on kind of where we are
today, I think it wouldn't hurt to sort of rewind a little bit and talk
about why multi-cloud is so difficult.
Michael Lynn (08:32): That makes sense.
Andrew Davidson (08:35): There's broadly been two main reasons why
multi-cloud is so hard. They kind of boil down to data and how much data
gravity there is. Of course, that's what our announcement is about
changing. In other words, your data has to be stored in one cloud or
another, or traditionally had to be. So actually moving that data to
another cloud or making it present or available in the other cloud, that
was enormously difficult and traditionally, made it so that people just
felt multi-cloud was essentially not achievable. The second key reason
multi-cloud has traditionally been very difficult is that there hasn't
been essentially, a community created or company backed sort of way of
standardizing operations around a multi-cloud posture.
Andrew Davidson (09:21): In other words, you had to go so deep in your
AWS environment, or your Google environment, your Azure environment, to
manage all that infrastructure to be completely comfortable with the
governance and life cycle management, that the idea of going and
learning to go do that again in another cloud platform was just
overwhelming. Who wants to do that? What's starting to change that
though, is that there's sort of best in class software vendors, as well
as SaaS offerings that are starting to basically, essentially build
consistency around the clouds and really are best in breed for doing so.
So when you look at what maybe Datadog is doing for monitoring or what
HashiCorp is doing with Terraform and Vault, infrastructure as code and
secrets management, all the other exciting announcements they're always
making, these dynamics are all kind of contributing to making it
possible for customers to actually start truly doing this. Then we're
coming in now with true multi-cloud data tier. So it's highly
complimentary with those other offerings. I think over the next couple
of years, this is going to start becoming very popular.
Michael Lynn (10:26): Sort of the next phase in the evolution of cloud
computing?
Andrew Davidson (10:29): Totally, totally.
Michael Lynn (10:30): I thought it might be good if we could take a look
at it. I know that some of the folks listening to this will be just
that, just listening to it. So we'll try and talk our way through it as
well. But let's give folks a peek at what this thing looks like. So I'm
going to share my screen here.
Andrew Davidson (10:48): Cool. Yeah. While you're pulling that up-
\[crosstalk 00:10:50\] Go ahead, Nic. Sorry.
Nic Raboy (10:51): I was going to ask, and then maybe this is something
that Mike is going to show when he brings up his screen-
Andrew Davidson (10:55): Yeah.
Nic Raboy (10:56): ... but from a user perspective, how much involvement
does the multi-cloud wire? Is it something that just happens behind the
scenes and I don't have to worry a thing about it, or is there going to
be some configurations that we're going to see?
Andrew Davidson (11:11): Yeah. It's pretty straightforward. It's a very
intuitive user interface for setting it up and then boom, your cluster's
multi-cloud, which Mike will show, but going back to the question
before, in order to take... Depending on what use case you've got for
multi-cloud, and I would say there's about maybe four kinds of use cases
and happy to go through them, depending on the use case, I think there's
a different set of things you're going to need to worry about for how to
use this from the perspective of your applications.
Michael Lynn (11:36): Okay. So for the folks listening in, I've opened
my web browser and I'm visiting cloud.MongoDB.com. I provided my
credentials and I'm logged into my Atlas console. So I'm on the first
tab, which is Atlas, and I'm looking at the list of clusters that I've
previously deployed. I've got a free tier cluster and some additional
project-based clusters. Let's say I want to deploy a new instance of
MongoDB, and I want to make use of multi-cloud. The first thing I'm
going to do is click the "Create New Cluster" button, and that's going
to bring up the deployment wizard. Here's where you make all the
decisions about what you want that cluster to look like. Andrew, feel
free to add color as I go through this.
Andrew Davidson (12:15): Totally.
Michael Lynn (12:16): So the first question is a global cluster
configuration. Just for this demo, I'm going to leave that closed. We'll
leave that for another day. The second panel is cloud provider and
region, and here's where it gets interesting. Now, Andrew, at the
beginning when you described what Atlas is, you mentioned that Atlas is
available on the top three cloud providers. So we've got AWS, Google
Cloud, and Azure, but really, doesn't it exist above the provider?
Andrew Davidson (12:46): In many ways, it does. You're right. Look,
thinking about kind of the history of how we got here, Atlas was
launched maybe near... about four and a half years ago in AWS and then
maybe three and a half years ago on Google Cloud and Azure. Ever since
that moment, we've just been deepening what Atlas is on all three
providers. So we've gotten to the point where we can really sort of
think about the database experience in a way that really abstracts away
the complexity of those providers and all of those years of investment
in each of them respectively, is what has enabled us to sort of unify
them together today in a way that frankly, would just be a real
challenge for someone to try and do on their own.
Andrew Davidson (13:28): The last thing you want to be trying to set up
is a distributed database service across multiple clouds. We've got some
customers who've tried to do it and it's a giant undertaking. We've got
large engineering teams working on this problem full time and boom, here
it is. So now, you can take advantage of it. We do it once, everyone
else can use it a thousand times. That's the beauty of it.
Michael Lynn (13:47): Beautiful. Fantastic. I was reading the update on
the release schedule changes for MongoDB, the core server product, and I
was just absolutely blown away with the amount of hours that goes into a
major release, just incredible amount of hours and then on top of that,
the ability that you get with Atlas to deploy that in multiple cloud's
pretty incredible.
Nic Raboy (14:09): Let me interject here for a second. We've got a
question coming in from the chat. So off the band is asking, "Will Atlas
support DigitalOcean or OVH or Ali Cloud?"
Andrew Davidson (14:19): Great questions. We don't have current plans to
do so, but I'll tell you. Everything about our roadmap is about customer
demand and what we're hearing from you. So hearing that from you right
now helps us think about it.
Michael Lynn (14:31): Great. Love the questions. Keep them coming. So
back to the screen. We've got our create new cluster wizard up and I'm
in the second panel choosing the cloud provider and region. What I
notice, something new I haven't seen before, is there's a call-out box
that is labeled, "multi-cloud multi-region workload isolation." So this
is the key to multi-cloud. Am I right?
Andrew Davidson (14:54): That's right.
Michael Lynn (14:54): So if I toggle that radio button over to on, I see
some additional options available to me and here is where I'm going to
specify the electable nodes in a cluster. So we have three possible
configurations. We've got the electable nodes for high availability. We
have the ability or the option to add read-only nodes, and we can
specify the provider and region. We've got an option to add analytics
nodes. Let's just focus on the electable nodes for the moment. By
default, AWS is selected. I think that's because I selected AWS as the
provider, but if I click "Add a Provider/Region," I now have the ability
to change the provider to let's say, GCP, and then I can select a
region. Of course, the regions are displaying Google's data center list.
So I can choose something that's near the application. I'm in
Philadelphia, so North Virginia is probably the closest. So now, we have
a multi-cloud, multi-provider deployment. Any other notes or things you
want to call out, Andrew?
Andrew Davidson (16:01): Yeah- \[crosstalk 00:16:02\]
Nic Raboy (16:01): Actually, Mike, real quick.
Michael Lynn (16:03): Yeah.
Nic Raboy (16:04): I missed it. When you added GCP, did you select two
or did it pre-populate with that? I'm wondering what's the thought
process behind how it calculated each of those node numbers.
Andrew Davidson (16:15): It's keeping them odd automatically. For
electable nodes, you have to have an odd number. That's based on-
\[crosstalk 00:16:20\]
Nic Raboy (16:20): Got it.
Andrew Davidson (16:20): ... we're going to be using a raft-like
consensus protocol, which allows us to maintain read and write
availability continuously as long as majority quorum is online. So if
you add a third one, if you add Azure, for example, for fun, why not?
What that means is we're now spread across three cloud providers and
you're going to have to make an odd number... You're going to have to
either make it 111 or 221, et cetera. What this means is you can now
withstand a global outage of any of the three cloud providers and still
have your application be continuously available for both reads and
writes because the other two cloud providers will continue to be online
and that's where you'll receive your majority quorum from.
Andrew Davidson (17:03): So I think what we've just demonstrated here is
kind of one of the four sort of dominant use cases for multi-cloud,
which is high availability resilience. It's kind of a pretty intuitive
one. In practice, a lot of people would want to use this in the context
of countries that have fewer cloud regions. In the US, we're a bit
spoiled. There's a bunch of AWS regions, bunch of Azure regions, a bunch
of Google Cloud regions. But if you're a UK based, France based, Canada
based, et cetera, your preferred cloud provider might have just one
region that country. So being able to expand into other regions from
another cloud provider, but keep data in your country for data
sovereignty requirements can be quite compelling.
Michael Lynn (17:46): So I would never want to deploy a single node in
each of the cloud providers, right? We still want a highly available
cluster deployed in each of the individual cloud providers. Correct?
Andrew Davidson (17:57): You can do 111. The downside with 111 is that
during maintenance rounds, you would essentially have writes that would
move to the second region on your priority list. That's broadly
reasonable actually, if you're using majority writes from a write
concern perspective. It kind of depends on what you want to optimize
for. One other thing I want to quickly show, Mike, is that there's
little dotted lines on the left side or triple bars on the left side.
You can actually drag and drop your preferred regional order with that.
That basically is choosing which region by default will take writes if
that region's online.
Michael Lynn (18:35): So in this deployment with the primary, in this
case, I've moved Azure to the top, that'll take the highest priority and
that will be my primary write receiver.
Andrew Davidson (18:47): Exactly. That would be where the primaries are.
If Azure were to be down or Azure Virginia were to be down, then what
would have initially been a secondary in US East 1 on AWS would be
elected primary and that's where writes would start going.
Michael Lynn (19:03): Got you. Yeah.
Andrew Davidson (19:04): Yeah.
Michael Lynn (19:05): So you mentioned majority writes. Can you explain
what that is for anyone who might be new to that concept?
Andrew Davidson (19:12): Yeah, so MongoDB has a concept of a write
concern and basically our best practice is to configure your writes,
which is a MongoDB client side driver configuration, to utilize the write
concern majority, which essentially says the driver will not acknowledge
the write from the perspective of the database and move on to the next
operation until the majority of the nodes in the replica set have
acknowledged that write. What that kind of guarantees you is that you're
not allowing your writes to sort of essentially, get past what your
replica set can keep up with. So in a world in which you have really
bursty momentary writes, you might consider a write concern of one, just
make sure it goes to the primary, but that can have some risks at scale.
So we recommend majority.
Michael Lynn (20:01): So in the list of use cases, you mentioned the
first and probably the most popular, which was to provide additional
access and availability in a region where there's only one provider data
center. Let's talk about some of the other reasons why would someone
want to deploy multi-cloud,
Andrew Davidson (20:19): Great question. The second, which actually
think may even be more popular, although you might tell me, "It's not
exactly as multi-cloudy as what we just talked about," but what I think
is going to be the most popular is being able to move from one cloud
provider to the other with no downtime. In other words, you're only
multi-cloud during the transition, then you're on the other cloud. So
it's kind of debatable, but having that freedom, that flexibility, and
basically the way this one would be configured, Mike, is if you were to
click "Cancel" here and just go back to the single cloud provider view,
in a world in which you have a cluster deployed on AWS just like you
have now, if this was a deployed cluster, you could just go to the top,
select Azure or GCP, click "Deploy," and we would just move you there.
That's also possible now.
Andrew Davidson (21:07): The reason I think this will be the most
commonly used is there's lots of reasons why folks need to be able to
move from one cloud provider to the other. Sometimes you have sort of an
organization that's been acquired into another organization and there's
a consolidation effort underway. Sometimes there's just a feeling that
another cloud provider has key capabilities that you want to start
taking advantage of more, so you want to make the change. Other times,
it's about really feeling more future-proof and just being able to not
be locked in and make that change. So this one, I think, is more of a
sort of boardroom level concern, as well as a developer empowerment
thing. It's really exciting to have at your fingertips, the power to
feel like I can just move my data around to anywhere in the world across
79 regions and nothing's holding me back from doing that. When you sit
at your workstation, that's really exciting.
Michael Lynn (22:00): Back to that comment you made earlier, really
reducing that data gravity-
Andrew Davidson (22:05): Totally.
Michael Lynn (22:05): ... and increasing fungibility. Yeah, go ahead,
Nic.
Nic Raboy (22:09): Yeah. So you mentioned being able to move things
around. So let me ask the same scenario, same thing, but when Mike was
able to change the priority of each of those clouds, can we change the
priority after deployment? Say Amazon is our priority right now for the
next year, but then after that, Google is our now top priority. Can we
change that after the fact?
Andrew Davidson (22:34): Absolutely. Very great point. In general with
Atlas, traditionally, the philosophy was always that basically
everything in this cluster builder that Mike's been showing should be
the kind of thing that you could configure when you first deploying
declaratively, and that you could then change and Atlas will just do the
heavy lifting to get you to that new declarative state. However, up
until last week, the only major exception to that was you couldn't
change your cloud provider. You could already change the region inside
the cloud provider, change your multi-region configs, et cetera. But
now, you can truly change between cloud providers, change the order of
priority for a multi-region environment that involves multiple cloud
providers. All of those things can easily be changed.
Andrew Davidson (23:15): When you make those changes, these are all no
downtime operations. We make that possible by doing everything in a
rolling manner on the backend and taking advantage of MongoDB's, in what
we were talking about earlier, the distributed system, the consensus
that allows us to ensure that we always have majority quorum online, and
it would just do all that heavy lifting to get you from any state to any
other state in a wall preserving that majority. It's really kind of a
beautiful thing.
Michael Lynn (23:39): It is. And so powerful. So what we're showing here
is the deployer, like you said, but all this same screen comes up when I
take a look at a previously deployed instance of MongoDB and I can make
changes right in that same way.
Andrew Davidson (23:55): Exactly.
Michael Lynn (23:55): Very powerful.
Andrew Davidson (23:56): Exactly.
Michael Lynn (23:56): Yeah.
Andrew Davidson (23:57): So there's a few other use cases I think we
should just quickly talk about because we've gone through two sort of
future-proof mobility moving from one to the other. We talked about high
availability resilience and how that's particularly useful in countries
where you might want to keep data in country and you might not have as
many cloud provider regions in that country. But the third use case
that's pretty exciting is, and I think empowering more for developers,
is sometimes you want to take advantage of the best capabilities of the
different cloud providers. You might love AWS because you just love
serverless and you love Lambda, and who doesn't? So you want to be there
for that aspect of your application.
Andrew Davidson (24:34): Maybe you also want to be able to take
advantage of some of the capabilities that Google offers around machine
learning and AI, and maybe you want to be able to have the ML jobs on
the Google side be able to access your data with low latency in that
cloud provider region. Well, now you can have a read replica in that
Google cloud region and do that right there. Maybe you want to take
advantage of Azure dev ops, just love the developer centricity that
we're seeing from Microsoft and Azure these days, and again, being able
to kind of mix and match and take advantage of the cloud provider you
want unlocks possibilities and functional capabilities that developers
just haven't really had at their fingertips before. So that's pretty
exciting too.
Michael Lynn (25:18): Great. So any other use cases that we want to
mention?
Andrew Davidson (25:23): Yeah. The final one is kind of a little bit of
a special category. It's more about saying that sometimes... So many of
our own customers and people listening are themselves, building software
services and cloud services on top of MongoDB Atlas. For people doing
that, you'll likely be aware that sometimes your end customers will
stipulate which underlying cloud provider you need to use for them. It's
a little frustrating when they do that. It's kind of like, "Oh my, I
have to go use a different cloud provider to service you." You can duke
it out with them and maybe make it happen without doing that. But now,
you have the ability to just easily service your end customers without
that getting in the way. If they have a rule that a certain cloud
provider has to be used, you can just service them too. So we power so
many layers of the infrastructure stack, so many SaaS services and
platforms, so many of them, this is very compelling.
Michael Lynn (26:29): So if I've got my data in AWS, they have a VPC, I
can establish a VPC between the application and the database?
Andrew Davidson (26:36): Correct.
Michael Lynn (26:37): And the same with Google and Azure.
Andrew Davidson (26:39): Yeah. There's an important note. MongoDB Atlas
offers VPC peering, as well as private link on AWS and Azure. We offer
VPC peering on Google as well. In the context of our multi-cloud
clusters that we've just announced, we don't yet have support for
private link and VPC peering. You're going to use public IP access list
management. That will be coming, along with global cluster support,
those will be coming in early 2021 as our current forward-looking
statement. Obviously, everything forward looking... There's uncertainty
that you want me to disclaimer in there, but what we've launched today
is really first and foremost, for access list management. However, when
you move one cluster from one cloud to the other, you can absolutely
take advantage of peering today or private link.
Nic Raboy (27:30): Because Mike has it up on his screen, am I able to
remove nodes from a cloud region on demand, at will?
Andrew Davidson (27:37): Absolutely. You can just add more replicas.
Just as we were saying, you can move from one to the other or sort of
change your preferred order of where the writes go, you can add more
replicas in any cloud at any time or remove them at any time \[crosstalk
00:27:53\] ... of Atlas vertical auto scaling too.
Nic Raboy (27:55): That was what I was going to ask. So how does that
work? How would you tell it, if it's going to auto-scale, could you tell
it to auto-scale? How does it balance between three different clouds?
Andrew Davidson (28:07): That's a great question. The way Atlas
auto-scaling works is you really... So if you choose an M30, you can see
the auto-scaling in there.
Nic Raboy (28:20): For people who are listening, this is all in the
create a new cluster screen.
Andrew Davidson (28:25): Basically, the way it works is we will
vertically scale you. If any of the nodes in the cluster are
essentially, getting to the point where they require scaling based on
underlying compute requirements, the important thing to note is that
it's a common misconception, I guess you could say, on MongoDB that you
might want to sort of scale only certain replicas and not others. In
general, you would want to scale them all symmetrically. The reason for
that is that the workload needs to be consistent across all the nodes
in the replica set. That's because even though the writes go to the
primary, the secondaries have to keep up with those writes too. Anyway.
Michael Lynn (29:12): I just wanted to show that auto-scale question
here.
Andrew Davidson (29:16): Oh, yes.
Michael Lynn (29:17): Yeah, there we go. So if I'm deploying an M30, I
get to specify at a minimum, I want to go down to an M20 and at a
maximum, based on the read-write profile and the activity application, I
want to go to a maximum of an M50, for example.
Andrew Davidson (29:33): Exactly.
Nic Raboy (29:35): But maybe I'm missing something or maybe it's not
even important based on how things are designed. Mike is showing how to
scale up and down from M20 to M50, but what if I wanted all of the new
nodes to only appear on my third priority tier? Is that a thing?
Andrew Davidson (29:55): Yeah, that's a form of auto-scaling that's
definitely... In other words, you're basically saying... Essentially,
what you're getting at is what if I wanted to scale my read throughput
by adding more read replicas?
Nic Raboy (30:04): Sure.
Andrew Davidson (30:05): It's generally speaking, not the way we
recommend scaling. We tend to recommend vertical scaling as opposed to
adding read replicas. \[crosstalk 00:30:14\]
Nic Raboy (30:14): Got it.
Andrew Davidson (30:14): The reason for that with MongoDB is that if you
scale reads with replicas, the risk is that you could find yourself in a
compounding failure situation where you're overwhelming all your
replicas somehow, and then one goes down and then all of a sudden, you
have the same workload going to an even smaller pool. So we tend to
vertically scale and/or introduce sharding once you're talking about
that kind of level of scale. However, there's scenarios, in which to
your point, you kind of want to have read replicas in other regions,
let's say for essentially servicing traffic from that region at low
latency and those kinds of use cases. That's where I think you're right.
Over time, we'll probably see more exotic forms of auto-scaling we'll
want to introduce. It's not there today.
Michael Lynn (31:00): Okay. So going back and we'll just finish out our
create a new cluster. Create a new cluster, I'll select multi-cloud and
I'll select electable nodes into three providers.
Andrew Davidson (31:15): So analytics on Azure- \[crosstalk 00:31:18\]
That's fine. That's totally fine.
Michael Lynn (31:20): Okay.
Andrew Davidson (31:21): Not a problem.
Michael Lynn (31:22): Okay. So a single cluster across AWS, GCP, and
Azure, and we've got an odd number of nodes. Okay. Looking good there. We'll select
our cluster tier. Let's say an M30 is fine and we'll specify the amount
of disk. Okay. So anything else that we want to bring into the
discussion? Any other features that we're missing?
Andrew Davidson (31:47): Not that I can think of. I'll say we've
definitely had some interesting early adoption so far. I'm not going to
name names, but we've seen folks, both take advantage of moving between
the cloud providers, we've seen some folks who have spread their
clusters across multiple cloud providers in a target country like I
mentioned, being able to keep my data in Canada, but across multiple
cloud providers. We've seen use cases in e-commerce. We've seen use
cases in healthcare. We've seen use cases in basically monitoring. We've
seen emergency services use cases. So it's just great early validation
to have this out in the market and to have so much enthusiasm for the
customers. So if anyone is keen to try this out, it's available to try
on MongoDB Atlas today.
Nic Raboy (32:33): So this was a pretty good episode. Actually, we have
a question coming. Let's address this one first. Just curious that M
stands for multi-tiered? Where did this naming convention derive from?
Andrew Davidson (32:48): That's a great question. The cluster tiers in
Atlas from the very beginning, we use this nomenclature of the M10, the
M20, the M30. The not-so-creative answer is that it stands for MongoDB,
\[crosstalk 00:33:00\] but it's a good point that now we can start
claiming that it has to do with multi-cloud, potentially. I like that.
Michael Lynn (33:08): Can you talk anything about the roadmap? Is there
anything that you can share about what's coming down the pike?
Andrew Davidson (33:13): Look, we're just going to keep going bigger,
faster, more customers, more scale. It's just so exciting. We're now
powering on Atlas some of the biggest games in the world, some of the
most popular consumer financial applications, applications that make
consumers' lives work, applications that enable manufacturers to
continue building all the things that we rely on, applications that
power for a truly global audience. We're seeing incredible adoption and
growth in developing economies. It's just such an exciting time and
being on the front edge of seeing developers really just transforming
the economy, the digital transformation that's happening.
Andrew Davidson (33:57): We're just going to continue it, focus on where
our customers want us to go to unlock more value for them, keep going
broader on the data platform. I think I mentioned that search is a big
focus for us, augmenting the traditional operational transactional
database, realm, the mobile database community, and essentially making
it possible to build those great mobile applications and have them
synchronize back up to the cloud mothership. I'm super excited about
that and the global run-up to the rollout of 5G. I think the possibilities
in mobile are just going to be incredible to watch in the coming year.
Yeah, there's just a lot. There's going to be a lot happening and we're
all going to be part of it together.
Michael Lynn (34:34): Sounds awesome.
Nic Raboy (34:34): If people wanted to get in contact with you after
this episode airs, you on Twitter, LinkedIn? Where would you prefer
people to reach out?
Andrew Davidson (34:43): I would just recommend people email directly:
. Love to hear product feedback, how we
can improve. That's what we're here for is to hear it from you directly,
connect you with the right people, et cetera.
Michael Lynn (34:56): Fantastic. Well, Andrew, thanks so much for taking
time out of your busy day. This has been a great conversation. Really
enjoyed learning more about multi-cloud and I look forward to having you
on the podcast again.
Andrew Davidson (35:08): Thanks so much. Have a great rest of your day,
everybody.
## Summary
With multi-cloud clusters on MongoDB Atlas, customers can realize the
benefits of a multi-cloud strategy with true data portability and a
simplified management experience. Developers no longer have to deal with
manual data replication, and businesses can focus their technical
resources on building differentiated software.
## Related Links
Check out the following resources for more information:
Introducing Multi-Cloud Clusters
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn about multi-cloud clusters with Andrew Davidson",
"contentType": "Podcast"
} | MongoDB Atlas Multicloud Clusters | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/amazon-sagemaker-and-mongodb-vector-search-part-3 | created | # Part #3: Semantically Search Your Data With MongoDB Atlas Vector Search
This final part of the series will show you how to use the Amazon SageMaker endpoint created in the previous part and perform a semantic search on your data using MongoDB Atlas Vector Search. The two parts shown in this tutorial will be:
- Creating and updating embeddings/vectors for your data.
- Creating vectors for a search query and sending them via Atlas Vector Search.
## Creating a MongoDB cluster and loading the sample data
If you haven’t done so, create a new cluster in your MongoDB Atlas account. Make sure to check `Add sample dataset` to get the sample data we will be working with right away into your cluster, and wait for the sample data to finish loading before continuing.
## Preparing embeddings
Are you ready for the final part?
Let’s have a look at the code (here, in Python)!
You can find the full repository on GitHub.
In the following section, we will look at the three relevant files that show you how you can implement a server app that uses the Amazon SageMaker endpoint.
## Accessing the endpoint: sagemaker.py
The `sagemaker.py` module is the wrapper around the Lambda/Gateway endpoint that we created in the previous example.
Make sure to create a `.env` file with the URL saved in `EMBEDDING_SERVICE`.
It should look like this:
```
MONGODB_CONNECTION_STRING="mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/?retryWrites=true&w=majority"
EMBEDDING_SERVICE="https://<your-api-gateway-endpoint>.amazonaws.com/TEST/sageMakerResource"
```
The following function will then attach the query that we want to search for to the URL and execute it.
```
import os
from typing import Optional
from urllib.parse import quote
import requests
from dotenv import load_dotenv
load_dotenv()
EMBEDDING_SERVICE = os.environ.get("EMBEDDING_SERVICE")
```
As a result, we expect to find the vector in a JSON field called `embedding`.
```
def create_embedding(plot: str) -> Optional[list[float]]:
encoded_plot = quote(plot)
embedding_url = f"{EMBEDDING_SERVICE}?query={encoded_plot}"
embedding_response = requests.get(embedding_url)
embedding_vector = embedding_response.json()["embedding"]
return embedding_vector
```
## Accessing and searching the data: atlas.py
The module `atlas.py` is the wrapper around everything MongoDB Atlas.
Similar to `sagemaker.py`, we first grab the `MONGODB_CONNECTION_STRING` that you can retrieve in Atlas by clicking on `Connect` in your cluster. It’s the authenticated URL to your cluster. We need to save `MONGODB_CONNECTION_STRING` to our `.env` file too.
We then go ahead and define a bunch of variables that we’ve set in earlier parts, like `VectorSearchIndex` and `embedding`, along with the automatically created `sample_mflix` demo data.
Using the Atlas driver for Python (called PyMongo), we then create a `MongoClient` which holds the connection to the Atlas cluster.
```
import os
from dotenv import load_dotenv
from pymongo import MongoClient, UpdateOne
from sagemaker import create_embedding
load_dotenv()
MONGODB_CONNECTION_STRING = os.environ.get("MONGODB_CONNECTION_STRING")
DATABASE_NAME = "sample_mflix"
COLLECTION_NAME = "embedded_movies"
VECTOR_SEARCH_INDEX_NAME = "VectorSearchIndex"
EMBEDDING_PATH = "embedding"
mongo_client = MongoClient(MONGODB_CONNECTION_STRING)
database = mongo_client[DATABASE_NAME]
movies_collection = database[COLLECTION_NAME]
```
The first step will be to actually prepare the already existing data with embeddings.
This is the sole purpose of the `add_missing_embeddings` function.
We’ll create a filter for the documents with missing embeddings and retrieve those from the database, only showing their plot, which is the only field we’re interested in for now.
Assuming we will only find a couple every time, we can then go through them and call the `create_embedding` endpoint for each, creating an embedding for the plot of the movie.
We’ll then add those new embeddings to the `movies_to_update` array so that we eventually only need one `bulk_write` to the database, which makes the call more efficient.
Note that for huge datasets with many embeddings to create, you might want to adjust the lambda function to take an array of queries instead of just a single query. For this simple example, it will do.
```
def add_missing_embeddings():
movies_with_a_plot_without_embedding_filter = {
"$and": [
{"plot": {"$exists": True, "$ne": ""}},
{"embedding": {"$exists": False}},
]
}
only_show_plot_projection = {"plot": 1}
movies = movies_collection.find(
movies_with_a_plot_without_embedding_filter,
only_show_plot_projection,
)
movies_to_update = []
for movie in movies:
embedding = create_embedding(movie["plot"])
update_operation = UpdateOne(
{"_id": movie["_id"]},
{"$set": {"embedding": embedding}},
)
movies_to_update.append(update_operation)
if movies_to_update:
result = movies_collection.bulk_write(movies_to_update)
print(f"Updated {result.modified_count} movies")
else:
print("No movies to update")
```
Now that the data is prepared, we add two more functions that we need to offer a nice REST service for our client application.
First, we want to be able to update the plot, which inherently means we need to update the embeddings again.
The `update_plot` is similar to the initial `add_missing_embeddings` function but a bit simpler since we only need to update one document.
```
def update_plot(title: str, plot: str) -> dict:
embedding = create_embedding(plot)
result = movies_collection.find_one_and_update(
{"title": title},
{"$set": {"plot": plot, "embedding": embedding}},
return_document=True,
)
return result
```
The other function we need to offer is the actual vector search. This can be done using the MongoDB Atlas aggregation pipeline that can be accessed via the Atlas driver.
The `$vectorSearch` stage needs to include the index name we want to use, the path to the embedding, and the information about how many results we want to get. This time, we only want to retrieve the title, so we add a `$project` stage to the pipeline. Make sure to use `list` to turn the cursor that the search returns into a Python list.
```
def execute_vector_search(vector: list[float]) -> list[dict]:
vector_search_query = {
"$vectorSearch": {
"index": VECTOR_SEARCH_INDEX_NAME,
"path": EMBEDDING_PATH,
"queryVector": vector,
"numCandidates": 10,
"limit": 5,
}
}
projection = {"$project": {"_id": 0, "title": 1}}
results = movies_collection.aggregate([vector_search_query, projection])
results_list = list(results)
return results_list
```
## Putting it all together: main.py
Now, we can put it all together. Let’s use Flask to expose a REST service for our client application.
```
from flask import Flask, request, jsonify
from atlas import execute_vector_search, update_plot
from sagemaker import create_embedding
app = Flask(__name__)
```
One route we want to expose is `/movies/` that can be executed with a `PUT` operation to update the plot of a movie given the title. The title will be a query parameter while the plot is passed in via the body. This function is using the `update_plot` that we created before in `atlas.py` and returns the movie with its new plot on success.
```
@app.route("/movies/<title>", methods=["PUT"])
def update_movie(title: str):
try:
request_json = request.get_json()
plot = request_json["plot"]
updated_movie = update_plot(title, plot)
if updated_movie:
return jsonify(
{
"message": "Movie updated successfully",
"updated_movie": updated_movie,
}
)
else:
return jsonify({"error": f"Movie with title {title} not found"}), 404
except Exception as e:
return jsonify({"error": str(e)}), 500
```
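If you want to try this endpoint once the Flask app is running, a request could look something like the following — the title (`Pixels`, which is part of the sample data) and the new plot are placeholders that you can swap for your own:
```
curl -X PUT -H "Content-Type: application/json" -d '{"plot": "A brand new plot for this movie."}' http://127.0.0.1:5000/movies/Pixels
```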
The other endpoint, finally, is the vector search: `/movies/search`.
A `query` is `POST`’ed to this endpoint which will then use `create_embedding` first to create a vector from this query. Note that we need to also create vectors for the query because that’s what the vector search needs to compare it to the actual data (or rather, its embeddings).
We then call `execute_vector_search` with this `embedding` to retrieve the results, which will be returned on success.
```
@app.route("/movies/search", methods=["POST"])
def search_movies():
try:
request_json = request.get_json()
query = request_json["query"]
embedding = create_embedding(query)
results = execute_vector_search(embedding)
jsonified_results = jsonify(
{
"message": "Movies searched successfully",
"results": results,
}
)
return jsonified_results
except Exception as e:
return jsonify({"error": str(e)}), 500
if __name__ == "__main__":
app.run(debug=True)
```
And that’s about all you have to do. Easy, wasn’t it?
Go ahead and run the Flask app (main.py) and when ready, send a cURL to see Atlas Vector Search in action. Here is an example when running it locally:
```
curl -X POST -H "Content-Type: application/json" -d '{"query": "A movie about the Earth, Mars and an invasion."}' http://127.0.0.1:5000/movies/search
```
This should lead to the following result:
```
{
"message": "Movies searched successfully",
"results": [
{
"title": "The War of the Worlds"
},
{
"title": "The 6th Day"
},
{
"title": "Pixels"
},
{
"title": "Journey to Saturn"
},
{
"title": "Moonraker"
}
]
}
```
War of the Worlds — a movie about Earth, Mars, and an invasion. And what a great one, right?
## That’s a wrap!
Of course, this is just a quick and short overview of how to use Amazon SageMaker to create vectors and then search via Vector Search.
We do have a full workshop for you to learn about all those parts in detail. Please visit the Search Lab GitHub page to learn more.
✅ Sign up for a free cluster.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
✅ Get help on our Community Forums.
| md | {
"tags": [
"Atlas",
"Python",
"AI",
"AWS",
"Serverless"
],
"pageDescription": "In this series, we look at how to use Amazon SageMaker and MongoDB Atlas Vector Search to semantically search your data.",
"contentType": "Tutorial"
} | Part #3: Semantically Search Your Data With MongoDB Atlas Vector Search | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/local-development-mongodb-atlas-cli-docker | created | # Local Development with the MongoDB Atlas CLI and Docker
Need a consistent development and deployment experience as developers work across teams and use different machines for their daily tasks? That is where Docker has you covered with containers. A common experience might include running a local version of MongoDB Community in a container and an application in another container. This strategy works for some organizations, but what if you want to leverage all the benefits that come with MongoDB Atlas in addition to a container strategy for your application development?
In this tutorial we'll see how to create a MongoDB-compatible web application, bundle it into a container with Docker, and manage creation as well as destruction for MongoDB Atlas with the Atlas CLI during container deployment.
It should be noted that this tutorial is intended for a development or staging setting on your local computer. It is not advised to use all the techniques found in this tutorial in a production setting. Use your best judgment when it comes to the code included.
If you’d like to try the results of this tutorial, check out the repository and instructions on GitHub.
## The prerequisites
There are a lot of moving parts in this tutorial, so you'll need a few things prior to be successful:
- A MongoDB Atlas account
- Docker
- Some familiarity with Node.js and JavaScript
The Atlas CLI can create an Atlas account for you along with any keys and ids, but for the scope of this tutorial you'll need one created along with quick access to the "Public API Key", "Private API Key", "Organization ID", and "Project ID" within your account. You can see how to do this in the documentation.
Docker is going to be the true star of this tutorial. You don't need anything beyond Docker because the Node.js application and the Atlas CLI will be managed by the Docker container, not your host computer.
On your host computer, create a project directory. The name isn't important, but for this tutorial we'll use **mongodbexample** as the project directory.
## Create a simple Node.js application with Express Framework and MongoDB
We're going to start by creating a Node.js application that communicates with MongoDB using the Node.js driver for MongoDB. The application will be simple in terms of functionality. It will connect to MongoDB, create a database and collection, insert a document, and expose an API endpoint to show the document with an HTTP request.
Within the project directory, create a new **app** directory for the Node.js application to live. Within the **app** directory, using a command line, execute the following:
```bash
npm init -y
npm install express mongodb
```
If you don't have Node.js installed, just create a **package.json** file within the **app** directory with the following contents:
```json
{
"name": "mongodbexample",
"version": "1.0.0",
"description": "",
"main": "main.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node main.js"
},
"keywords": ],
"author": "",
"license": "ISC",
"dependencies": {
"express": "^4.18.2",
"mongodb": "^4.12.1"
}
}
```
Next, we'll need to define our application logic. Within the **app** directory we need to create a **main.js** file. Within the **main.js** file, add the following JavaScript code:
```javascript
const { MongoClient } = require("mongodb");
const Express = require("express");
const app = Express();
const mongoClient = new MongoClient(process.env.MONGODB_ATLAS_URI);
let database, collection;
app.get("/data", async (request, response) => {
try {
const results = await collection.find({}).limit(5).toArray();
response.send(results);
} catch (error) {
response.status(500).send({ "message": error.message });
}
});
const server = app.listen(3000, async () => {
try {
await mongoClient.connect();
database = mongoClient.db(process.env.MONGODB_DATABASE);
collection = database.collection(`${process.env.MONGODB_COLLECTION}`);
collection.insertOne({ "firstname": "Nic", "lastname": "Raboy" });
console.log("Listening at :3000");
} catch (error) {
console.error(error);
}
});
process.on("SIGTERM", async () => {
if(process.env.CLEANUP_ONDESTROY == "true") {
await database.dropDatabase();
}
mongoClient.close();
server.close(() => {
console.log("NODE APPLICATION TERMINATED!");
});
});
```
There's a lot happening in the few lines of code above. We're going to break it down!
Before we break down the pieces, take note of the environment variables used throughout the JavaScript code. We'll be passing these values through Docker in the end so we have a more dynamic experience with our local development.
The first important snippet of code to focus on is the start of our application service:
```javascript
const server = app.listen(3000, async () => {
try {
await mongoClient.connect();
database = mongoClient.db(process.env.MONGODB_DATABASE);
collection = database.collection(`${process.env.MONGODB_COLLECTION}`);
collection.insertOne({ "firstname": "Nic", "lastname": "Raboy" });
console.log("Listening at :3000");
} catch (error) {
console.error(error);
}
});
```
Using the client that was configured near the top of the file, we can connect to MongoDB. Once connected, we can get a reference to a database and collection. This database and collection doesn't need to exist before that because it will be created automatically when data is inserted. With the reference to a collection, we insert a document and begin listening for API requests through HTTP.
This brings us to our one and only endpoint:
```javascript
app.get("/data", async (request, response) => {
try {
const results = await collection.find({}).limit(5).toArray();
response.send(results);
} catch (error) {
response.status(500).send({ "message": error.message });
}
});
```
When the `/data` endpoint is consumed, the first five documents in our collection are returned to the user. Otherwise if there was some issue, an error message would be returned.
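Once the container is up and running later in this tutorial, you could sanity-check this endpoint from your host machine with a quick request — for example:
```bash
curl http://localhost:3000/data
```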
This brings us to something optional, but potentially valuable when it comes to a Docker deployment for local development:
```javascript
process.on("SIGTERM", async () => {
if(process.env.CLEANUP_ONDESTROY == "true") {
await database.dropDatabase();
}
mongoClient.close();
server.close(() => {
console.log("NODE APPLICATION TERMINATED!");
});
});
```
The above code says that when a termination event is sent to the application, drop the database we had created and close the connection to MongoDB as well as the Express Framework service. This could be useful if we want to undo everything we had created when the container stops. If you want your changes to persist, it might not be necessary. For example, if you want your data to exist between container deployments, persistence would be required. On the other hand, if you are using the container as part of a test pipeline and want to clean up when you're done, the termination commands could be valuable.
So we have an environment variable heavy Node.js application. What's next?
## Deploying a MongoDB Atlas cluster with network rules, user roles, and sample data
While we have the application, our MongoDB Atlas cluster may not be available to us. For example, maybe this is our first time being exposed to Atlas and nothing has been created yet. We need to be able to quickly and easily create a cluster, configure our IP access rules, specify users and permissions, and then connect with our Node.js application.
This is where the MongoDB Atlas CLI does the heavy lifting!
There are many different ways to create a script. Some like Bash, some like ZSH, some like something else. We're going to be using ZX which is a JavaScript wrapper for Bash.
Within your project directory, not your **app** directory, create a **docker_run_script.mjs** file with the following code:
```javascript
#!/usr/bin/env zx
$.verbose = true;
const runtimeTimestamp = Date.now();
process.env.MONGODB_CLUSTER_NAME = process.env.MONGODB_CLUSTER_NAME || "examples";
process.env.MONGODB_USERNAME = process.env.MONGODB_USERNAME || "demo";
process.env.MONGODB_PASSWORD = process.env.MONGODB_PASSWORD || "password1234";
process.env.MONGODB_DATABASE = process.env.MONGODB_DATABASE || "business_" + runtimeTimestamp;
process.env.MONGODB_COLLECTION = process.env.MONGODB_COLLECTION || "people_" + runtimeTimestamp;
process.env.CLEANUP_ONDESTROY = process.env.CLEANUP_ONDESTROY || false;
var app;
process.on("SIGTERM", () => {
app.kill("SIGTERM");
});
try {
let createClusterResult = await $`atlas clusters create ${process.env.MONGODB_CLUSTER_NAME} --tier M0 --provider AWS --region US_EAST_1 --output json`;
await $`atlas clusters watch ${process.env.MONGODB_CLUSTER_NAME}`
let loadSampleDataResult = await $`atlas clusters loadSampleData ${process.env.MONGODB_CLUSTER_NAME} --output json`;
} catch (error) {
console.log(error.stdout);
}
try {
let createAccessListResult = await $`atlas accessLists create --currentIp --output json`;
let createDatabaseUserResult = await $`atlas dbusers create --role readWriteAnyDatabase,dbAdminAnyDatabase --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD} --output json`;
await $`sleep 10`
} catch (error) {
console.log(error.stdout);
}
try {
let connectionString = await $`atlas clusters connectionStrings describe ${process.env.MONGODB_CLUSTER_NAME} --output json`;
let parsedConnectionString = new URL(JSON.parse(connectionString.stdout).standardSrv);
parsedConnectionString.username = encodeURIComponent(process.env.MONGODB_USERNAME);
parsedConnectionString.password = encodeURIComponent(process.env.MONGODB_PASSWORD);
parsedConnectionString.search = "retryWrites=true&w=majority";
process.env.MONGODB_ATLAS_URI = parsedConnectionString.toString();
app = $`node main.js`;
} catch (error) {
console.log(error.stdout);
}
```
Once again, we're going to break down what's happening!
Like with the Node.js application, the ZX script will be using a lot of environment variables. In the end, these variables will be passed with Docker, but you can hard-code them at any time if you want to test things outside of Docker.
The first important thing to note is the defaulting of environment variables:
```javascript
process.env.MONGODB_CLUSTER_NAME = process.env.MONGODB_CLUSTER_NAME || "examples";
process.env.MONGODB_USERNAME = process.env.MONGODB_USERNAME || "demo";
process.env.MONGODB_PASSWORD = process.env.MONGODB_PASSWORD || "password1234";
process.env.MONGODB_DATABASE = process.env.MONGODB_DATABASE || "business_" + runtimeTimestamp;
process.env.MONGODB_COLLECTION = process.env.MONGODB_COLLECTION || "people_" + runtimeTimestamp;
process.env.CLEANUP_ONDESTROY = process.env.CLEANUP_ONDESTROY || false;
```
The above snippet isn't a requirement, but if you want to avoid setting or passing around variables, defaulting them could be helpful. In the above example, the use of `runtimeTimestamp` will allow us to create a unique database and collection should we want to. This could be useful if numerous developers plan to use the same Docker images to deploy containers because then each developer would be in a sandboxed area. If the developer chooses to undo the deployment, only their unique database and collection would be dropped.
Next we have the following:
```javascript
process.on("SIGTERM", () => {
app.kill("SIGTERM");
});
```
We have something similar in the Node.js application as well. We have it in the script because eventually the script controls the application. So when we (or Docker) stops the script, the same stop event is passed to the application. If we didn't do this, the application would not have a graceful shutdown and the drop logic wouldn't be applied.
Now we have three try / catch blocks, each focusing on something particular.
The first block is responsible for creating a cluster with sample data:
```javascript
try {
let createClusterResult = await $`atlas clusters create ${process.env.MONGODB_CLUSTER_NAME} --tier M0 --provider AWS --region US_EAST_1 --output json`;
await $`atlas clusters watch ${process.env.MONGODB_CLUSTER_NAME}`
let loadSampleDataResult = await $`atlas clusters loadSampleData ${process.env.MONGODB_CLUSTER_NAME} --output json`;
} catch (error) {
console.log(error.stdout);
}
```
If the cluster already exists, an error will be caught. We have three blocks because in our scenario, it is alright if certain parts already exist.
Next we worry about users and access:
```javascript
try {
let createAccessListResult = await $`atlas accessLists create --currentIp --output json`;
let createDatabaseUserResult = await $`atlas dbusers create --role readWriteAnyDatabase,dbAdminAnyDatabase --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD} --output json`;
await $`sleep 10`
} catch (error) {
console.log(error.stdout);
}
```
We want our local IP address to be added to the access list and we want a user to be created. In this example, we are creating a user with extensive access, but you may want to refine the level of permission they have in your own project. For example, maybe the container deployment is meant to be a sandboxed experience. In this scenario, it makes sense that the created user has access only to the database and collection in the sandbox. We `sleep` after these commands because they are not instant and we want to make sure everything is ready before we try to connect.
Finally we try to connect:
```javascript
try {
let connectionString = await $`atlas clusters connectionStrings describe ${process.env.MONGODB_CLUSTER_NAME} --output json`;
let parsedConnectionString = new URL(JSON.parse(connectionString.stdout).standardSrv);
parsedConnectionString.username = encodeURIComponent(process.env.MONGODB_USERNAME);
parsedConnectionString.password = encodeURIComponent(process.env.MONGODB_PASSWORD);
parsedConnectionString.search = "retryWrites=true&w=majority";
process.env.MONGODB_ATLAS_URI = parsedConnectionString.toString();
app = $`node main.js`;
} catch (error) {
console.log(error.stdout);
}
```
After the first try / catch block finishes, we'll have a connection string. We can finalize our connection string with a Node.js URL object by including the username and password, then we can run our Node.js application. Remember, the environment variables and any manipulations we made to them in our script will be passed into the Node.js application.
## Transition the MongoDB Atlas workflow to containers with Docker and Docker Compose
At this point, we have an application and we have a script for preparing MongoDB Atlas and launching the application. It's time to get everything into a Docker image to be deployed as a container.
At the root of your project directory, add a **Dockerfile** file with the following:
```dockerfile
FROM node:18
WORKDIR /usr/src/app
COPY ./app/* ./
COPY ./docker_run_script.mjs ./
RUN curl https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_1.3.0_linux_x86_64.tar.gz --output mongodb-atlas-cli_1.3.0_linux_x86_64.tar.gz
RUN tar -xvf mongodb-atlas-cli_1.3.0_linux_x86_64.tar.gz && mv mongodb-atlas-cli_1.3.0_linux_x86_64 atlas_cli
RUN chmod +x atlas_cli/bin/atlas
RUN mv atlas_cli/bin/atlas /usr/bin/
RUN npm install -g zx
RUN npm install
EXPOSE 3000
CMD ["./docker_run_script.mjs"]
```
The custom Docker image will be based on a Node.js image which will allow us to run our Node.js application as well as our ZX script.
After our files are copied into the image, we run a few commands to download and extract the MongoDB Atlas CLI.
Finally, we install ZX and our application dependencies and run the ZX script. The `CMD` command for running the script is done when the container is run. Everything else is done when the image is built.
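If you wanted to build the image directly at this point, a plain Docker build would look something like the following — the image tag is arbitrary and only for illustration:
```bash
docker build -t mongodbexample .
```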
We could build our image from this **Dockerfile** file, but it is a lot easier to manage when there is a Compose configuration. Within the project directory, create a **docker-compose.yml** file with the following YAML:
```yaml
version: "3.9"
services:
web:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
environment:
MONGODB_ATLAS_PUBLIC_API_KEY: YOUR_PUBLIC_KEY_HERE
MONGODB_ATLAS_PRIVATE_API_KEY: YOUR_PRIVATE_KEY_HERE
MONGODB_ATLAS_ORG_ID: YOUR_ORG_ID_HERE
MONGODB_ATLAS_PROJECT_ID: YOUR_PROJECT_ID_HERE
MONGODB_CLUSTER_NAME: examples
MONGODB_USERNAME: demo
MONGODB_PASSWORD: password1234
# MONGODB_DATABASE: sample_mflix
# MONGODB_COLLECTION: movies
CLEANUP_ONDESTROY: true
```
You'll want to swap the environment variable values with your own. In the above example, the database and collection variables are commented out so the defaults would be used in the ZX script.
To see everything in action, execute the following from the command line on the host computer:
```bash
docker-compose up
```
The above command will use the **docker-compose.yml** file to build the Docker image if it doesn't already exist. The build process will bundle our files, install our dependencies, and obtain the MongoDB Atlas CLI. When Compose deploys a container from the image, the environment variables will be passed to the ZX script responsible for configuring MongoDB Atlas. When ready, the ZX script will run the Node.js application, further passing the environment variables. If the `CLEANUP_ONDESTROY` variable was set to `true`, when the container is stopped the database and collection will be removed.
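When you're done experimenting, stopping the deployment is what triggers that cleanup. You can press Ctrl+C in the terminal running Compose, or run the following from the project directory in another terminal to stop and remove the container:
```bash
docker-compose down
```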
## Conclusion
The MongoDB Atlas CLI can be a powerful tool for bringing MongoDB Atlas to your local development experience on Docker. Essentially, you would be swapping out a local version of MongoDB with Atlas CLI logic to manage a more feature-rich cloud version of MongoDB.
MongoDB Atlas enhances the MongoDB experience by giving you access to more features such as Atlas Search, Charts, and App Services, which allow you to build great applications with minimal effort. | md | {
"tags": [
"MongoDB",
"Bash",
"JavaScript",
"Docker",
"Node.js"
],
"pageDescription": "Learn how to use the MongoDB Atlas CLI with Docker in this example that includes JavaScript and Node.js.",
"contentType": "Tutorial"
} | Local Development with the MongoDB Atlas CLI and Docker | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/pytest-fixtures-and-pypi | created | # Testing and Packaging a Python Library
# Testing & Packaging a Python Library
This tutorial will show you how to build some helpful pytest fixtures for testing code that interacts with a MongoDB database. On top of that, I'll show how to package a Python library using the popular hatchling library, and publish it to PyPI.
This is the second tutorial in a series! Feel free to check out the first tutorial if you like, but it's not necessary if you want to just read on.
## Coding with Mark?
This tutorial is loosely based on the second episode of a new livestream I host, called "Coding with Mark." I'm streaming on Wednesdays at 2 p.m. GMT (that's 9 a.m. Eastern or 6 a.m. Pacific, if you're an early riser!). If that time doesn't work for you, you can always catch up by watching the recording!
Currently, I'm building an experimental data access layer library that should provide a toolkit for abstracting complex document models from the business logic layer of the application that's using them.
You can check out the code in the project's GitHub repository!
## The problem with testing data
Testing is easier when the code you're testing is relatively standalone and can be tested in isolation. Sadly, code that works with data within MongoDB is at the other end of the spectrum — it's an integration test by definition because you're testing your integration with MongoDB.
You have two options when writing test that works with MongoDB:
- Mock out MongoDB, so instead of working with MongoDB, your code works with an object that just *looks like* MongoDB but doesn't really store data. mongomock is a good solution if you're following this technique.
- Work directly with MongoDB, but ensure the database is in a known state before your tests run (by loading test data into an empty database) and then clean up any changes you make after your tests are run.
The first approach is architecturally simpler — your tests don't run against MongoDB, so you don't need to configure or run a real MongoDB server. On the other hand, you need to manage an object that pretends to be a `MongoClient`, or a `Database`, or a `Collection`, so that it responds in accurate ways to any calls made against it. And because it's not a real MongoDB connection, it's easy to use those objects in ways that don't accurately reflect a real MongoDB connection.
My preferred approach is the latter: My tests will run against a real MongoDB instance, and I will have the test framework clean up my database after each run using transactions. This makes it harder to run the tests and they may run more slowly, but it should do a better job of highlighting real problems interacting with MongoDB itself.
### Some alternative approaches
Before I ran in and decided to write my own plugin for pytest, I decided to see what others have done before me. I am building my own ODM, after all — there's only so much room for Not Invented Here™ in my life. There are two reasonably popular pytest integrations for use with MongoDB: pytest-mongo and pytest-mongodb. Sadly, neither did quite what I wanted. But they both look good — if they do what *you* want, then I recommend using them.
### pytest-mongo
Pytest-mongo is a pytest plugin that enables you to test code that relies on a running MongoDB database. It allows you to specify fixtures for the MongoDB process and client, and it will spin up a MongoDB process to run tests against, if you configure it to do so.
### pytest-mongodb
Pytest-mongo is a pytest plugin that enables you to test code that relies on a database connection to a MongoDB and expects certain data to be present. It allows you to specify fixtures for database collections in JSON/BSON or YAML format. Under the hood, it uses mongomock to simulate a MongoDB connection, or you can use a MongoDB connection, if you prefer.
Both of these offer useful features — especially the ability to provide fixture data that's specified in files on disk. Pytest-mongo even provides the ability to clean up the database after each test! When I looked a bit further, though, it does this by deleting all the collections in the test database, which is not the behavior I was looking for.
I want to use MongoDB transactions to automatically roll back any changes that are made by each test. This way, the test won't actually commit any changes to MongoDB, and only the changes it would have made are rolled back, so the database will be efficiently left in the correct state after each test run.
## Pytest fixtures for MongoDB
I'm going to use pytest's fixtures feature to provide both a MongoDB connection object and a transaction session to each test that requires them. Behind the scenes, each fixture object will clean up after itself when it is finished.
### How fixtures work
Fixtures in pytest are defined as functions, usually in a file called `conftest.py`. The thing that often surprises people new to fixtures, however, is that pytest will magically provide them to any test function with a parameter with the same name as the fixture. It's a form of dependency injection and is probably easier to show than to describe:
```python
# conftest.py
import pytest

@pytest.fixture
def sample_fixture():
    return "Hello, World"

# test_example.py
def test_sample_fixture(sample_fixture):
    # pytest injects the fixture's return value because the parameter name matches
    assert sample_fixture == "Hello, World"
```
As well as pytest providing fixture values to test functions, it will also do the same with other fixture functions. I'll be making use of this in the second fixture I write.
Fixtures are called once for their scope, and by default, a fixture's scope is "function" which means it'll be called once for each test function. I want my "session" fixture to be called (and rolled back) for each function, but it will be much more efficient for my "mongodb" client fixture to be called once per session — i.e., at the start of my whole test run.
The final bit of pytest fixture theory I want to explain is that if you want something cleaned up *after* a scope is over — for example, when the test function is complete — the easiest way to accomplish this is to write a generator function using yield instead of return, like this:
```python
def sample_fixture():
# Any code here will be executed *before* the test run
yield "Hello, World"
# Any code here will be executed *after* the test run
```
I don't know about you, but despite the magic, I really like this setup. It's nice and consistent, once you know how to use it.
### A MongoClient fixture
The first fixture I need is one that returns a MongoClient instance that is connected to a MongoDB cluster.
Incidentally, MongoDB Atlas Serverless clusters are perfect for this as they don't cost anything when you're not using them. If you're only running your tests a few times a day, or even less, then this could be a good way to save on hosting costs for test infrastructure.
I want to provide configuration to the test runner via an environment variable, `MDB_URI`, which will be the connection string provided by Atlas. In the future, I may want to provide the connection string via a command-line flag, which is something you can do with pytest, but I'll leave that to later.
As I mentioned before, the scope of the fixture should be "session" so that the client is configured once at the start of the test run and then closed at the end. I'm actually going to leave clean-up to Python, so I won't do that explicitly myself.
Here's the fixture:
```python
import pytest
import pymongo
import os
@pytest.fixture(scope="session")
def mongodb():
client = pymongo.MongoClient(os.environ["MDB_URI"])
assert client.admin.command("ping")["ok"] != 0.0 # Check that the connection is okay.
return client
```
The above code means that I can write a test that reads from a MongoDB cluster:
```python
# test_fixtures.py
def test_mongodb_fixture(mongodb):
""" This test will pass if MDB_URI is set to a valid connection string. """
assert mongodb.admin.command("ping")["ok"] > 0
```
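To run the test, export the `MDB_URI` environment variable with your own Atlas connection string (the values below are placeholders) and invoke pytest:
```
export MDB_URI="mongodb+srv://<user>:<password>@<cluster>.mongodb.net/"
python -m pytest
```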
### Transactions in MongoDB
As I mentioned, the fixture above is fine for reading from an existing database, but any changes made to the data would be persisted after the tests were finished. In order to correctly clean up after the test run, I need to start a transaction before the test run and then abort the transaction after the test run so that any changes are rolled back. This is how Django's test runner works with relational databases!
In MongoDB, to create a transaction, you first need to start a session which is done with the `start_session` method on the MongoClient object. Once you have a session, you can call its `start_transaction` method to start a transaction and its `abort_transaction` method to roll back any database updates that were run between the two calls.
One warning here: You *must* provide the session object to all your queries or they won't be considered part of the session you've started. All of this together looks like this:
```python
session = mongodb.start_session()
session.start_transaction()
my_collection.insert_one(
{"this document": "will be erased"},
session=session,
)
session.abort_transaction()
```
That's not too bad. Now, I'll show you how to wrap up that logic in a fixture.
### Wrapping up a transaction in a fixture
The fixture takes the code above, replaces the middle with a `yield` statement, and wraps it in a fixture function:
```python
@pytest.fixture
def rollback_session(mongodb):
session = mongodb.start_session()
session.start_transaction()
try:
yield session
finally:
session.abort_transaction()
```
This time, I haven't specified the scope of the fixture, so it defaults to "function" which means that the `abort_transaction` call will be made after each test function is executed.
Just to be sure that the test fixture both rolls back changes and allows subsequent
```python
def test_update_mongodb(mongodb, rollback_session):
mongodb.docbridge.tests.insert_one(
{
"_id": "bad_document",
"description": "If this still exists, then transactions aren't working.",
},
session=rollback_session,
)
assert (
mongodb.docbridge.tests.find_one(
{"_id": "bad_document"}, session=rollback_session
)
!= None
)
```
Note that the calls to `insert_one` and `find_one` both provide the `rollback_session` fixture value as a `session` argument. If you forget it, unexpected things will happen!
## Packaging a Python library
Packaging a Python library has always been slightly daunting, and it's made more so by the fact that these days, the packaging ecosystem changes quite a bit. At the time of writing, a good back end for building Python packages is hatchling from the Hatch project.
In broad terms, for a simple Python package, the steps to publishing your package are these:
- Describe your package.
- Build your package.
- Push the package to PyPI.
Before you go through these steps, it's worth installing the following packages into your development environment:
- build - used for installing your build dependencies and packaging your project
- twine - used for securely pushing your packages to PyPI
You can install both of these with:
```
python -m pip install --upgrade build twine
```
### Describing the package
First, you need to describe your project. Once upon a time, this would have required a `setup.py` file. These days, `pyproject.toml` is the way to go. I'm just going to link to the `pyproject.toml` file in GitHub. You'll see that the file describes the project. It lists `pymongo` as a dependency. It also states that "hatchling.build" is the build back end in a couple of lines toward the top of the file.
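To give a rough idea of its shape, here's a minimal sketch of such a file — the real file in the repository contains more metadata, but the project name and version match the build output below, and the dependency and build back end are the ones just mentioned:
```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "docbridge"
version = "0.0.1"
dependencies = ["pymongo"]
```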
It's not super interesting, but it does allow you to do the next step...
### Building the package
Once you've described your project, you can build a distribution from it by running the following command:
```
$ python -m build
* Creating venv isolated environment...
* Installing packages in isolated environment... (hatchling)
* Getting build dependencies for sdist...
* Building sdist...
* Building wheel from sdist
* Creating venv isolated environment...
* Installing packages in isolated environment... (hatchling)
* Getting build dependencies for wheel...
* Building wheel...
Successfully built docbridge-0.0.1.tar.gz and docbridge-0.0.1-py3-none-any.whl
```
### Publishing to PyPI
Once the wheel and gzipped tarballs have been created, they can be published to PyPI (assuming the library name is still unique!) by running Twine:
```
$ python -m twine upload dist/*
Uploading distributions to https://upload.pypi.org/legacy/
Enter your username: bedmondmark
Enter your password:
Uploading docbridge-0.0.1-py3-none-any.whl
100% ━━━━━━━━━━━━━━━━━━━━ 6.6/6.6 kB • 00:00 • ?
Uploading docbridge-0.0.1.tar.gz
100% ━━━━━━━━━━━━━━━━━━━━8.5/8.5 kB • 00:00 • ?
View at:
https://pypi.org/project/docbridge/0.0.1/
```
And that's it! I don't know about you, but I always go and check that it really worked.
, and sometimes they're extended references!
I'm really excited about some of the abstraction building blocks I have planned, so make sure to read my next tutorial, or if you prefer, join me on the livestream at 2 p.m. GMT on Wednesdays!
| md | {
"tags": [
"MongoDB",
"Python"
],
"pageDescription": "As part of the coding-with-mark series, see how to build some helpful pytest fixtures for testing code that interacts with a MongoDB database, and how to package a Python library using the popular hatchling library.",
"contentType": "Tutorial"
} | Testing and Packaging a Python Library | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/superduperdb-ai-development-with-mongodb | created | # Using SuperDuperDB to Accelerate AI Development on MongoDB Atlas Vector Search
## Introduction
Are you interested in getting started with vector search and AI on MongoDB Atlas but don’t know where to start? The journey can be daunting; developers are confronted with questions such as:
- Which model should I use?
- Should I go with an open or closed source?
- How do I correctly apply my model to my data in Atlas to create vector embeddings?
- How do I configure my Atlas vector search index correctly?
- Should I chunk my text or apply a vectorizing model to the text directly?
- How and where can I robustly serve my model to be ready for new searches, based on incoming text queries?
SuperDuperDB is an open-source Python project designed to accelerate AI development with the database and assist in answering such questions, allowing developers to focus on what they want to build, without getting bogged down in the details of exactly how vector search and AI more generally are implemented.
SuperDuperDB includes computation of model outputs and model training which directly work with data in your database, as well as first-class support for vector search. In particular, SuperDuperDB supports MongoDB community and Atlas deployments.
You can follow along with the code below, but if you prefer, all of the code is available in the SuperDuperDB GitHub repository.
## Getting started with SuperDuperDB
SuperDuperDB is super-easy to install using pip:
```
python -m pip install -U superduperdb[apis]
```
Once you’ve installed SuperDuperDB, you’re ready to connect to your MongoDB Atlas deployment:
```python
from superduperdb import superduper
db = superduper("mongodb+srv://<user>:<password>@...mongodb.net/documents")
```
The trailing characters after the last “/” denote the database you’d like to connect to. In this case, the database is called "documents." You should make sure that the user is authorized to access this database.
The variable `db` is a connector that is simultaneously:
- A database client.
- An artifact store for AI models (stores large file objects).
- A meta-data store, storing important information about your models as they relate to the database.
- A query interface allowing you to easily execute queries including vector search, without needing to explicitly handle the logic of converting the queries into vectors.
## Connecting SuperDuperDB with AI models
*Let’s see this in action.*
With SuperDuperDB, developers can import model wrappers that support a variety of open-source projects as well as AI API providers, such as OpenAI. Developers may even define and program their own models.
For example, to create a vectorizing model using the OpenAI API, first set your `OPENAI_API_KEY` as an environment variable:
```shell
export OPENAI_API_KEY="sk-..."
```
Now, simply import the OpenAI model wrapper:
```python
from superduperdb.ext.openai.model import OpenAIEmbedding
model = OpenAIEmbedding(
identifier='text-embedding-ada-002', model='text-embedding-ada-002')
```
To check this is working, you can apply this model to a single text snippet using the `predict`
method, specifying that this is a single data point with `one=True`.
```python
>>> model.predict('This is a test', one=True)
[-0.008146246895194054,
-0.0036965329200029373,
-0.0006024622125551105,
-0.005724836140871048,
-0.02455105632543564,
0.01614714227616787,
...]
```
Alternatively, we can also use an open-source model (not behind an API), using, for instance, the `sentence-transformers` library:
```python
import sentence_transformers
from superduperdb.components.model import Model
```
```python
from superduperdb import vector
```
```python
model = Model(
identifier='all-MiniLM-L6-v2',
object=sentence_transformers.SentenceTransformer('all-MiniLM-L6-v2'),
encoder=vector(shape=(384,)),
predict_method='encode',
postprocess=lambda x: x.tolist(),
batch_predict=True,
)
```
This code snippet uses the base `Model` wrapper, which supports arbitrary model class instances, using both open-sourced and in-house code. One simply supplies the class instance to the object parameter, optionally specifying `preprocess` and/or `postprocess` functions. The `encoder` argument tells Atlas Vector Search what size the outputs of the model are, and the `batch_predict=True` option makes computation quicker.
As before, we can test the model:
```python
>>> model.predict('This is a test', one=True)
[-0.008146246895194054,
-0.0036965329200029373,
-0.0006024622125551105,
-0.005724836140871048,
-0.02455105632543564,
0.01614714227616787,
...]
```
## Inserting and querying data via SuperDuperDB
Let’s add some data to MongoDB using the `db` connection. We’ve prepared some data from the PyMongo API to add a meta twist to this walkthrough. You can download this data with this command:
```shell
curl -O https://superduperdb-public.s3.eu-west-1.amazonaws.com/pymongo.json
```
```python
import json
from superduperdb.backends.mongodb.query import Collection
from superduperdb.base.document import Document as D
with open('pymongo.json') as f:
data = json.load(f)
db.execute(
Collection('documents').insert_many([D(r) for r in data])
)
```
You’ll see from this command that, in contrast to `pymongo`, `superduperdb`
includes query objects (`Collection(...)...`). This allows `superduperdb` to pass the queries around to models, computations, and training runs, as well as save the queries for future use.\
Other than this fact, `superduperdb` supports all of the commands that are supported by the core `pymongo` API.
Here is an example of fetching some data with SuperDuperDB:
```python
>>> r = db.execute(Collection('documents').find_one())
>>> r
Document({
'key': 'pymongo.mongo_client.MongoClient',
'parent': None,
'value': '\nClient for a MongoDB instance, a replica set, or a set of mongoses.\n\n',
'document': 'mongo_client.md',
'res': 'pymongo.mongo_client.MongoClient',
'_fold': 'train',
'_id': ObjectId('652e460f6cc2a5f9cc21db4f')
})
```
You can see that the usual data from MongoDB is wrapped with the `Document` class.
You can recover the unwrapped document with `unpack`:
```python
>>> r.unpack()
{'key': 'pymongo.mongo_client.MongoClient',
'parent': None,
'value': '\nClient for a MongoDB instance, a replica set, or a set of mongoses.\n\n',
'document': 'mongo_client.md',
'res': 'pymongo.mongo_client.MongoClient',
'_fold': 'train',
'_id': ObjectId('652e460f6cc2a5f9cc21db4f')}
```
The reason `superduperdb` uses the `Document` abstraction is that, in SuperDuperDB, you don't need to manage converting data to bytes yourself. We have a system of configurable and user-controlled types, or "Encoders," which allow users to insert, for example, images directly. *(This is a topic of an upcoming tutorial!)*
## Configuring models to work with vector search on MongoDB Atlas using SuperDuperDB
Now you have chosen and tested a model and inserted some data, you may configure vector search on MongoDB Atlas using SuperDuperDB. To do that, execute this command:
```python
from superduperdb import VectorIndex
from superduperdb import Listener
db.add(
VectorIndex(
identifier='pymongo-docs',
indexing_listener=Listener(
model=model,
key='value',
select=Collection('documents').find(),
predict_kwargs={'max_chunk_size': 1000},
),
)
)
```
This command tells `superduperdb` to do several things:
- Search the "documents" collection
- Set up a vector index on our Atlas cluster, using the text in the "value" field (Listener)
- Use the model variable to create vector embeddings
After receiving this command, SuperDuperDB:
- Configures a MongoDB Atlas knn-index in the "documents" collection.
- Saves the model object in the SuperDuperDB model store hosted on GridFS.
- Applies model to all data in the "documents" collection, and saves the vectors in the documents.
- Saves the fact that the model is connected to the "pymongo-docs" vector index.
If you’d like to “reload” your model in a later session, you can do this with the `load` command:
```python
>>> db.load("model", 'all-MiniLM-L6-v2')
```
To look at what happened during the creation of the VectorIndex, we can see that the individual documents now contain vectors:
```python
>>> db.execute(Collection('documents').find_one()).unpack()
{'key': 'pymongo.mongo_client.MongoClient',
'parent': None,
'value': '\nClient for a MongoDB instance, a replica set, or a set of mongoses.\n\n',
'document': 'mongo_client.md',
'res': 'pymongo.mongo_client.MongoClient',
'_fold': 'train',
'_id': ObjectId('652e460f6cc2a5f9cc21db4f'),
'_outputs': {'value': {'text-embedding-ada-002': [-0.024740776047110558,
0.013489063829183578,
0.021334229037165642,
-0.03423869237303734,
...]}}}
```
The outputs of models are always saved in the `"_outputs.."` path of the documents. This allows MongoDB Atlas Vector Search to know where to look to create the fast vector lookup index.
You can verify also that MongoDB Atlas has created a `knn` vector search index by logging in to your Atlas account and navigating to the search tab. It will look like this:
![The MongoDB Atlas UI, showing a list of indexes attached to the documents collection.][1]
The green ACTIVE status indicates that MongoDB Atlas has finished comprehending and “organizing” the vectors so that they may be searched quickly.
If you navigate to the **“...”** sign on **Actions** and click **edit with JSON editor**, then you can inspect the explicit index definition which was automatically configured by `superduperdb`:
![The MongoDB Atlas cluster UI, showing the vector search index details.][2]
You can confirm from this definition that the index looks into the `"_outputs.."` path of the documents in our collection.
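Your exact definition may differ, but based on the document shown earlier (a `_outputs.value.text-embedding-ada-002` array produced by `text-embedding-ada-002`), the generated definition looks roughly like the sketch below. The similarity function and dynamic-mapping settings here are assumptions, not the exact output:
```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "_outputs": {
        "type": "document",
        "fields": {
          "value": {
            "type": "document",
            "fields": {
              "text-embedding-ada-002": {
                "type": "knnVector",
                "dimensions": 1536,
                "similarity": "cosine"
              }
            }
          }
        }
      }
    }
  }
}
```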
## Querying vector search with a high-level API with SuperDuperDB
Now that our index is ready to go, we can perform some “search-by-meaning” queries using the `db` connection:
```python
>>> query = 'Query the database'
>>> result = db.execute(
... Collection('documents')
... .like(D({'value': query}), vector_index='pymongo-docs', n=5)
... .find({}, {'value': 1, 'key': 1})
... )
>>> for r in result:
... print(r.unpack())
{'key': 'find', 'value': '\nQuery the database.\n\nThe filter argument is a query document that all results\nmust match. For example:\n\n`pycon\n>>> db'}
{'key': 'database_name', 'value': '\nThe name of the database this command was run against.\n\n'}
{'key': 'aggregate', 'value': '\nPerform a database-level aggregation.\n\nSee the aggregation pipeline ...'}
```
## Useful links
- GitHub
- Documentation
- Blog
- Example use cases and apps
- Slack community
- LinkedIn
- Twitter
- YouTube
## Contributors are welcome!
SuperDuperDB is open source and permissively licensed under the Apache 2.0 license. We would like to encourage developers interested in open-source development to contribute to our discussion forums and issue boards and make their own pull requests. We'll see you on GitHub!
## Become a Design Partner!
We are looking for visionary organizations we can help to identify and implement transformative AI applications for their business and products. We're offering this absolutely for free. If you would like to learn more about this opportunity, please reach out to us via email.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1ea0a942a4e805fc/65d63171c520883d647f9cb9/image2.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5f3999da670dc6cd/65d631712e0c64553cca2ae4/image1.png | md | {
"tags": [
"Atlas",
"Python"
],
"pageDescription": "Discover how you can use SuperDuperDB to describe complex AI pipelines built on MongoDB Atlas Vector Search and state of the art LLMs.",
"contentType": "Article"
} | Using SuperDuperDB to Accelerate AI Development on MongoDB Atlas Vector Search | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/srv-connection-strings | created | # MongoDB 3.6: Here to SRV you with easier replica set connections
If you have logged into MongoDB Atlas
recently - and you should, the entry-level tier is free! - you may have
noticed a strange new syntax on 3.6 connection strings.
## MongoDB Seed Lists
What is this `mongodb+srv` syntax?
Well, in MongoDB 3.6 we introduced the concept of a seed
list
that is specified using DNS records, specifically
SRV and
TXT records. You will recall
from using replica sets with MongoDB that the client must specify at
least one replica set member (and may specify several of them) when
connecting. This allows a client to connect to a replica set even if one
of the nodes that the client specifies is unavailable.
You can see an example of this URL on a 3.4 cluster connection string:
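For the Atlas cluster used in the examples below, a 3.4-style connection string looks roughly like the following (the hostnames, replica set name, and options are illustrative and match the SRV/TXT lookups shown later in this post):
``` sh
mongodb://freeclusterjd-shard-00-00-ffp4c.mongodb.net:27017,freeclusterjd-shard-00-01-ffp4c.mongodb.net:27017,freeclusterjd-shard-00-02-ffp4c.mongodb.net:27017/test?ssl=true&replicaSet=FreeClusterJD-shard-0&authSource=admin
```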
Note that without the SRV record configuration we must list several
nodes (in the case of Atlas we always include all the cluster members,
though this is not required). We also have to specify the `ssl` and
`replicaSet` options.
With the 3.4 or earlier driver, we have to specify all the options on
the command line using the MongoDB URI
syntax.
The use of SRV records eliminates the requirement for every client to
pass in a complete set of state information for the cluster. Instead, a
single SRV record identifies all the nodes associated with the cluster
(and their port numbers) and an associated TXT record defines the
options for the URI.
## Reading SRV and TXT Records
We can see how this works in practice on a MongoDB Atlas cluster with a
simple Python script.
``` python
import srvlookup  # pip install srvlookup
import sys
import dns.resolver  # pip install dnspython

host = None

if len(sys.argv) > 1:
    host = sys.argv[1]

if host:
    services = srvlookup.lookup("mongodb", domain=host)
    for i in services:
        print("%s:%i" % (i.hostname, i.port))
    for txtrecord in dns.resolver.query(host, 'TXT'):
        print("%s: %s" % (host, txtrecord))
else:
    print("No host specified")
```
We can run this script using the node specified in the 3.6 connection
string as a parameter.
*(Screenshot: the node is specified in the connection string.)*
``` sh
$ python mongodb_srv_records.py freeclusterjd-ffp4c.mongodb.net
freeclusterjd-shard-00-00-ffp4c.mongodb.net:27017
freeclusterjd-shard-00-01-ffp4c.mongodb.net:27017
freeclusterjd-shard-00-02-ffp4c.mongodb.net:27017
freeclusterjd-ffp4c.mongodb.net: "authSource=admin&replicaSet=FreeClusterJD-shard-0"
$
```
You can also do this lookup with nslookup:
``` sh
JD10Gen-old:~ jdrumgoole$ nslookup
> set type=SRV
> _mongodb._tcp.rs.joedrumgoole.com
Server: 10.65.141.1
Address: 10.65.141.1#53
Non-authoritative answer:
_mongodb._tcp.rs.joedrumgoole.com service = 0 0 27022 rs1.joedrumgoole.com.
_mongodb._tcp.rs.joedrumgoole.com service = 0 0 27022 rs2.joedrumgoole.com.
_mongodb._tcp.rs.joedrumgoole.com service = 0 0 27022 rs3.joedrumgoole.com.
Authoritative answers can be found from:
> set type=TXT
> rs.joedrumgoole.com
Server: 10.65.141.1
Address: 10.65.141.1#53
Non-authoritative answer:
rs.joedrumgoole.com text = "authSource=admin&replicaSet=srvdemo"
```
You can see how this could be used to construct a 3.4 style connection
string by comparing it with the 3.4 connection string above.
As you can see, the complexity of the cluster and its configuration
parameters are stored in the DNS server and hidden from the end user. If
a node's IP address or name changes or we want to change the replica set
name, this can all now be done completely transparently from the
client's perspective. We can also add and remove nodes from a cluster
without impacting clients.
So now whenever you see `mongodb+srv` you know you are expecting a SRV
and TXT record to deliver the client connection string.
## Creating SRV and TXT records
Of course, SRV and TXT records are not just for Atlas. You can also
create your own SRV and TXT records for your self-hosted MongoDB
clusters. All you need for this is edit access to your DNS server so you
can add SRV and TXT records. In the examples that follow we are using
the AWS Route 53 DNS service.
I have set up a demo replica set on AWS with a three-node setup. They
are
``` sh
rs1.joedrumgoole.com
rs2.joedrumgoole.com
rs3.joedrumgoole.com
```
Each has a mongod process running on port 27022. I have set up a
security group that allows access to my local laptop and the nodes
themselves so they can see each other.
I also set up the DNS names for the above nodes in AWS Route 53.
We can start the mongod processes by running the following command on
each node.
``` sh
$ sudo /usr/local/m/versions/3.6.3/bin/mongod --auth --port 27022 --replSet srvdemo --bind_ip 0.0.0.0 --keyFile mdb_keyfile
```
Now we need to set up the SRV and TXT records for this cluster.
The SRV record points to the server or servers that will comprise the
members of the replica set. The TXT record defines the options for the
replica set, specifically the database that will be used for
authorization and the name of the replica set. It is important to note
that the **mongodb+srv** format URI implicitly adds "ssl=true". In our
case SSL is not used for the demo so we have to append "&ssl=false" to
the client connector. Note that the SRV record is specifically designed
to look up the **mongodb** service referenced at the start of the URL.
The settings in AWS Route 53 are:
Which leads to the following entry in the zone file for Route 53.
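As a sketch, the resulting SRV entries in the zone file look something like this (the TTL value of 300 is illustrative; the priority, weight, port, and targets match the replica set above):
``` sh
_mongodb._tcp.rs.joedrumgoole.com. 300 IN SRV 0 0 27022 rs1.joedrumgoole.com.
_mongodb._tcp.rs.joedrumgoole.com. 300 IN SRV 0 0 27022 rs2.joedrumgoole.com.
_mongodb._tcp.rs.joedrumgoole.com. 300 IN SRV 0 0 27022 rs3.joedrumgoole.com.
```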
Now we can add the TXT record. By convention, we use the same name as
the SRV record (`rs.joedrumgoole.com`) so that MongoDB knows where to
find the TXT record.
We can do this on AWS Route 53 as follows:
This will create the following TXT record.
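Again as a sketch, the TXT record boils down to a single entry like this (TTL illustrative):
``` sh
rs.joedrumgoole.com. 300 IN TXT "authSource=admin&replicaSet=srvdemo"
```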
Now we can access this service as:
``` sh
mongodb+srv://rs.joedrumgoole.com/test
```
This will retrieve a complete URL and connection string which can then
be used to contact the service.
The whole process is outlined below:
Once your records are set up, you can easily change port numbers without
impacting clients and also add and remove cluster members.
SRV records are another way in which MongoDB is making life easier for
database developers everywhere.
You should also check out full documentation on SRV and TXT records in
MongoDB
3.6.
You can sign up for a free MongoDB Atlas tier
which is suitable for single user use.
Find out how to use your favorite programming language with MongoDB via
our MongoDB drivers.
Please visit MongoDB University for
free online training in all aspects of MongoDB.
Follow Joe Drumgoole on twitter for
more news about MongoDB.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "SRV records are another way in which MongoDB is making life easier for database developers everywhere.",
"contentType": "News & Announcements"
} | MongoDB 3.6: Here to SRV you with easier replica set connections | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/audio-find-atlas-vector-search | created | # Audio Find - Atlas Vector Search for Audio
## Introduction
As we venture deeper into the realm of digital audio, the frontiers of music discovery are expanding. The pursuit for a more personalized audio experience has led us to develop a state-of-the-art music catalog system. This system doesn't just archive music; it understands it. By utilizing advanced sound embeddings and leveraging the power of MongoDB Atlas Vector Search, we've crafted an innovative platform that recommends songs not by genre or artist, but by the intrinsic qualities of the music itself.
This article was written together with co-writer Ran Shir, music composer and founder of Cues Assets, a production music group. We researched and developed the following architecture to allow businesses to take advantage of their audio materials for search.
### Demo video for the main flow
:youtube[]{vid=RJRy0-kEbik}
## System architecture overview
At the heart of this music catalog is a Python service, intricately detailed in our Django-based views.py. This service is the workhorse for generating sound embeddings, using the Panns-inference model to analyze and distill the unique signatures of audio files uploaded by users. Here's how our sophisticated system operates:
**Audio file upload and storage:**
A user begins by uploading an MP3 file through the application's front end. This file is then securely transferred to Amazon S3, ensuring that the user's audio is stored safely in the cloud.
**Sound embedding generation:**
When an audio file lands in our cloud storage, our Django service jumps into action. It downloads the file from S3, using the Python requests library, into a temporary storage on the server to avoid any data loss during processing.
**Normalization and embedding processing:**
The downloaded audio file is then processed to extract its features. Using librosa, a Python library for audio analysis, the service loads the audio file and passes it to our Panns-inference model. The model, running on a GPU for accelerated computation, computes a raw 4096-dimensional embedding vector that captures the essence of the audio.
**Embedding normalization:**
The raw embedding is then normalized to ensure consistent comparison scales when performing similarity searches. This normalization step is crucial for the efficacy of vector search, enabling a fair and accurate retrieval of similar songs.
**MongoDB Atlas Vector Search integration:**
The normalized embedding is then ready to be ingested by MongoDB Atlas. Here, it's indexed alongside the metadata of the audio file in the "embeddings" field. This indexing is what powers the vector search, allowing the application to perform a K-nearest neighbor (KNN) search to find and suggest the songs most similar to the one uploaded by the user.
**User interaction and feedback:**
Back on the front end, the application communicates with the user, providing status updates during the upload process and eventually serving the results of the similarity search, all in a user-friendly and interactive manner.
*(Architecture diagram: sound catalog similarity search.)*
This architecture encapsulates a blend of cloud technology, machine learning, and database management to deliver a unique music discovery experience that's as intuitive as it is revolutionary.
## Uploading and storing MP3 files
The journey of an MP3 file through our system begins the moment a user selects a track for upload. The frontend of the application, built with user interaction in mind, takes the first file from the dropped files and prepares it for upload. This process is initiated with an asynchronous call to an endpoint that generates a signed URL from AWS S3. This signed URL is a token of sorts, granting temporary permission to upload the file directly to our S3 bucket without compromising security or exposing sensitive credentials.
### Frontend code for file upload
The frontend code, typically written in JavaScript for a web application, makes use of the `axios` library to handle HTTP requests. When the user selects a file, the code sends a request to our back end to retrieve a signed URL. With this URL, the file can be uploaded to S3. The application handles the upload status, providing real-time feedback to the user, such as "Uploading..." and then "Searching based on audio..." upon successful upload. This interactive feedback loop is crucial for user satisfaction and engagement.
```javascript
async uploadFiles(files) {
const file = files[0]; // Get the first file from the dropped files
if (file) {
try {
this.imageStatus = "Uploading...";
// Post a request to the backend to get a signed URL for uploading the file
const response = await axios.post('https://[backend-endpoint]/getSignedURL', {
fileName: file.name,
fileType: file.type
});
const { url } = response.data;
// Upload the file to the signed URL
const resUpload = await axios.put(url, file, {
headers: {
'Content-Type': file.type
}
});
console.log('File uploaded successfully');
console.log(resUpload.data);
this.imageStatus = "Searching based on image...";
// Post a request to trigger the audio description generation
const describeResponse = await axios.post('https://[backend-endpoint]/labelsToDescribe', {
fileName: file.name
});
const prompt = describeResponse.data;
this.searchQuery = prompt;
this.$refs.dropArea.classList.remove('drag-over');
if (prompt === "I'm sorry, I can't provide assistance with that request.") {
this.imageStatus = "I'm sorry, I can't provide assistance with that request."
throw new Error("I'm sorry, I can't provide assistance with that request.");
}
this.fetchListings();
// If the request is successful, show a success message
this.showSuccessPopup = true;
this.imageStatus = "Drag and drop an image here"
// Auto-hide the success message after 3 seconds
setTimeout(() => {
this.showSuccessPopup = false;
}, 3000);
} catch (error) {
console.error('File upload failed:', error);
// In case of an error, reset the UI and show an error message
this.$refs.dropArea.classList.remove('drag-over');
this.showErrorPopup = true;
// Auto-hide the error message after 3 seconds
setTimeout(() => {
this.showErrorPopup = false;
}, 3000);
// Reset the status message after 6 seconds
setTimeout(() => {
this.imageStatus = "Drag and drop an image here"
}, 6000);
}
}
}
```
### Backend Code for Generating Signed URLs
On the backend, a Serverless function written for the MongoDB Realm platform interacts with AWS SDK. It uses stored AWS credentials to access S3 and create a signed URL, which it then sends back to the frontend. This URL contains all the necessary information for the file upload, including the file name, content type, and access control settings.
```javascript
// Serverless function to generate a signed URL for file uploads to AWS S3
exports = async function({ query, headers, body}, response) {
// Import the AWS SDK
const AWS = require('aws-sdk');
// Update the AWS configuration with your access keys and region
AWS.config.update({
accessKeyId: context.values.get('YOUR_AWS_ACCESS_KEY'), // Replace with your actual AWS access key
secretAccessKey: context.values.get('YOUR_AWS_SECRET_KEY'), // Replace with your actual AWS secret key
region: 'eu-central-1' // The AWS region where your S3 bucket is hosted
});
// Create a new instance of the S3 service
const s3 = new AWS.S3();
// Parse the file name and file type from the request body
const { fileName, fileType } = JSON.parse(body.text())
// Define the parameters for the signed URL
const params = {
Bucket: 'YOUR_S3_BUCKET_NAME', // Replace with your actual S3 bucket name
Key: fileName, // The name of the file to be uploaded
ContentType: fileType, // The content type of the file to be uploaded
ACL: 'public-read' // Access control list setting to allow public read access
};
// Generate the signed URL for the 'putObject' operation
const url = await s3.getSignedUrl('putObject', params);
// Return the signed URL in the response
return { 'url' : url }
};
```
## Sound embedding with Panns-inference model
Once an MP3 file is securely uploaded to S3, a Python service, which interfaces with our Django back end, takes over. This service is where the audio file is transformed into something more — a compact representation of its sonic characteristics known as a sound embedding. Using the librosa library, the service reads the audio file, standardizing the sample rate to ensure consistency across all files. The Panns-inference model then takes a slice of the audio waveform and infers its embedding.
```python
import tempfile
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from panns_inference import AudioTagging
import librosa
import numpy as np
import os
import json
import requests

# Function to normalize a vector
def normalize(v):
    norm = np.linalg.norm(v)
    return v / norm if norm != 0 else v

# Function to generate sound embeddings from an audio file
def get_embedding(audio_file):
    # Initialize the AudioTagging model with the specified device
    model = AudioTagging(checkpoint_path=None, device='gpu')
    # Load the audio file with librosa, normalizing the sample rate to 44100
    a, _ = librosa.load(audio_file, sr=44100)
    # Add an extra dimension to the array to fit the model's input requirements
    query_audio = a[None, :]
    # Perform inference to get the embedding
    _, emb = model.inference(query_audio)
    # Normalize the embedding before returning
    return normalize(emb[0])

# Django view to handle the POST request for downloading and embedding
@csrf_exempt
def download_and_embed(request):
    if request.method == 'POST':
        try:
            # Parse the request body to get the file name
            body_data = json.loads(request.body.decode('utf-8'))
            file_name = body_data.get('file_name')
            # If the file name is not provided, return an error
            if not file_name:
                return JsonResponse({'error': 'Missing file_name in the request body'}, status=400)
            # Construct the file URL (placeholder) and send a request to get the file
            file_url = f"https://[s3-bucket-url].amazonaws.com/{file_name}"
            response = requests.get(file_url)
            # If the file is successfully retrieved
            if response.status_code == 200:
                # Create a temporary file to store the downloaded content
                with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as temp_audio_file:
                    temp_audio_file.write(response.content)
                    temp_audio_file.flush()
                    # Log the temporary file's name and size for debugging
                    print(f"Temp file: {temp_audio_file.name}, size: {os.path.getsize(temp_audio_file.name)}")
                    # Generate the embedding for the downloaded file
                    embedding = get_embedding(temp_audio_file.name)
                    # Return the embedding as a JSON response
                    return JsonResponse({'embedding': embedding.tolist()})
            else:
                # If the file could not be downloaded, return an error
                return JsonResponse({'error': 'Failed to download the file'}, status=400)
        except json.JSONDecodeError:
            # If there is an error in the JSON data, return an error
            return JsonResponse({'error': 'Invalid JSON data in the request body'}, status=400)
    # If the request method is not POST, return an error
    return JsonResponse({'error': 'Invalid request'}, status=400)
```
### Role of Panns-inference model
The Panns-inference model is a deep learning model trained to understand and capture the nuances of audio content. It generates a vector for each audio file, which is a numerical representation of the file's most defining features. This process turns a complex audio file into a simplified, quantifiable form that can be easily compared against others.
For more information on setting up this model, see the following GitHub example.
## Vector search with MongoDB Atlas
**Storing and indexing embeddings in MongoDB Atlas**
MongoDB Atlas is where the magic of searchability comes to life. The embeddings generated by our Python service are stored in a MongoDB Atlas collection. Atlas, with its robust indexing capabilities, allows us to index these embeddings efficiently, enabling rapid and accurate vector searches.
This is the index definition used on the “songs” collection:
```json
{
"mappings": {
"dynamic": false,
"fields": {
"embeddings": {
"dimensions": 4096,
"similarity": "dotProduct",
"type": "knnVector"
},
"file": {
"normalizer": "none",
"type": "token"
}
}
}
}
```
The "file" field is indexed with a "token" type for file name filtering logic, explained later in the article.
**Songs collection sample document:**
```json
{
_id : ObjectId("6534dd09164a19b0ac1f7311"),
file : "Glorious Outcame Full Mix.mp3",
embeddings : Array (4096)
}
```
### Vector search functionality
Vector search in MongoDB Atlas employs a K-nearest neighbor (KNN) algorithm to find the closest embeddings to the one provided by the user's uploaded file. When a user initiates a search, the system queries the Atlas collection, searching through the indexed embeddings to find and return a list of songs with the most similar sound profiles.
This combination of technologies — from the AWS S3 storage and signed URL generation to the processing power of the Panns-inference model, all the way to the search capabilities of MongoDB Atlas — creates a seamless experience. Users can not only upload their favorite tracks but also discover new ones that carry a similar auditory essence, all within an architecture built for scale, speed, and accuracy.
### Song Lookup and similarity search
**“Get Songs” functionality**
The “Get Songs” feature is the cornerstone of the music catalog, enabling users to find songs with a similar auditory profile to their chosen track. When a user uploads a song, the system doesn't just store the file; it actively searches for and suggests tracks with similar sound embeddings. This is achieved through a similarity search, which uses the sound embeddings stored in the MongoDB Atlas collection.
```javascript
// Serverless function to perform a similarity search on the 'songs' collection in MongoDB Atlas
exports = async function({ query, body }, response) {
// Initialize the connection to MongoDB Atlas
const mongodb = context.services.get('mongodb-atlas');
// Connect to the specific database
const db = mongodb.db('YourDatabaseName'); // Replace with your actual database name
// Connect to the specific collection within the database
const songsCollection = db.collection('YourSongsCollectionName'); // Replace with your actual collection name
// Parse the incoming request body to extract the embedding vector
const parsedBody = JSON.parse(body.text());
console.log(JSON.stringify(parsedBody)); // Log the parsed body for debugging
// Perform a vector search using the parsed embedding vector
  let foundSongs = await songsCollection.aggregate([
{ "$vectorSearch": {
"index" : "default",
"queryVector": parsedBody.embedding,
"path": "embeddings",
"numCandidates": 15,
"limit" : 15
}
}
]).toArray()
// Map the found songs to a more readable format by stripping unnecessary path components
let searchableSongs = foundSongs.map((song) => {
// Extract a cleaner, more readable song title
let shortName = song.name.replace('.mp3', '');
return shortName.replace('.wav', ''); // Handle both .mp3 and .wav file extensions
});
// Prepare an array of $unionWith stages to combine results from multiple collections if needed
let unionWithStages = searchableSongs.slice(1).map((songTitle) => {
return {
$unionWith: {
coll: 'RelatedSongsCollection', // Name of the other collection to union with
pipeline: [
{ $match: { "songTitleField": songTitle } }, // Match the song titles against the related collection
],
},
};
});
// Execute the aggregation query with a $match stage for the first song, followed by any $unionWith stages
const relatedSongsCollection = db.collection('YourRelatedSongsCollectionName'); // Replace with your actual related collection name
const locatedSongs = await relatedSongsCollection.aggregate([
{ $match: { "songTitleField": searchableSongs[0] } }, // Start with the first song's match stage
...unionWithStages, // Include additional stages for related songs
]).toArray();
// Return the array of located songs as the response
return locatedSongs;
};
```
Since embeddings are stored together with the songs data we can use the embedding field when performing a lookup of nearest N neighbours. This approach implements the "More Like This" button.
```javascript
// Get input song 3 neighbours which are not itself. "More Like This"
let foundSongs = await songs.aggregate([
{ "$vectorSearch": {
"index" : "default",
"queryVector": songDetails.embeddings,
"path": "embeddings",
"filter" : { "file" : { "$ne" : fullSongName}},
"numCandidates": 15,
"limit" : 3
}}
]).toArray()
```
The code filters out the searched song itself.
## Backend code for similarity search
The backend code responsible for the similarity search is a serverless function within MongoDB Atlas. It executes an aggregation pipeline that begins with a vector search stage, leveraging the `$vectorSearch` operator with `queryVector` to perform a K-nearest neighbor search. The search is conducted on the "embeddings" field, comparing the uploaded track's embedding with those in the collection to find the closest matches. The results are then mapped to a more human-readable format, omitting unnecessary file path information for the user's convenience.
```javascript
let foundSongs = await songs.aggregate([
{ "$vectorSearch": {
"index" : "default",
"queryVector": parsedBody.embedding,
"path": "embeddings",
"numCandidates": 15,
"limit" : 15
}
}
]).toArray()
```
## Frontend functionality
**Uploading and searching for similar songs**
The front end provides a drag-and-drop interface for users to upload their MP3 files easily. Once a file is selected and uploaded, the front end communicates with the back end to initiate the search for similar songs based on the generated embedding. This process is made transparent to the user through real-time status updates.
**User Interface and Feedback Mechanisms**
The user interface is designed to be intuitive, with clear indications of the current process — whether it's uploading, searching, or displaying results. Success and error popups inform the user of the status of their request. A success popup confirms the upload and successful search, while an error popup alerts the user to any issues that occurred during the process. These popups are designed to auto-dismiss after a short duration to keep the interface clean and user-friendly.
## Challenges and solutions
### Developmental challenges
One of the challenges faced was ensuring the seamless integration of various services, such as AWS S3, MongoDB Atlas, and the Python service for sound embeddings. Handling large audio files and processing them efficiently required careful consideration of file management and server resources.
### Overcoming the challenges
To overcome these issues, we utilized temporary storage for processing and optimized the Python service to handle large files without significant memory overhead. Additionally, the use of serverless functions within MongoDB Atlas allowed us to manage compute resources effectively, scaling with the demand as needed.
## Conclusion
This music catalog represents a fusion of cloud storage, advanced audio processing, and modern database search capabilities. It offers an innovative way to explore music by sound rather than metadata, providing users with a uniquely tailored experience.
Looking ahead, potential improvements could include enhancing the Panns-inference model for even more accurate embedding generation and expanding the database to accommodate a greater variety of audio content. Further refinements to the user interface could also be made, such as incorporating user feedback to improve the recommendation algorithm continually.
In conclusion, the system stands as a testament to the possibilities of modern audio technology and database management, offering users a powerful tool for music discovery and promising avenues for future development.
**Special Thanks:** Ran Shir and Cues Assets group for the work, research efforts and materials.
Want to continue the conversation? Meet us over in the MongoDB Community forums! | md | {
"tags": [
"Atlas",
"JavaScript",
"Python",
"AI",
"Django",
"AWS"
],
"pageDescription": "This in-depth article explores the innovative creation of a music catalog system that leverages the power of MongoDB Atlas's vector search and a Python service for sound embedding. Discover how sound embeddings are generated using the Panns-inference model via S3 hosted files, and how similar songs are identified, creating a dynamic and personalized audio discovery experience.",
"contentType": "Article"
} | Audio Find - Atlas Vector Search for Audio | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/query-analytics-part-1 | created | # Query Analytics Part 1: Know Your Queries
Do you know what your users are searching for? What they’re finding? Or not finding?
The quality of search results drives users toward or away from using a service. If you can’t find it, it doesn’t exist… or it may as well not exist. A lack of discoverability leads to a lost customer. A library patron can’t borrow a book they can’t find. The bio-medical researcher won’t glean insights from research papers or genetic information that is not in the search results. If users aren’t finding what they need, expect, or what delights them, they’ll go elsewhere.
As developers, we’ve successfully deployed full-text search into our application. We can clearly see that our test queries are able to match what we expect, and the relevancy of those test queries looks good. But as we know, our users immediately try things we didn’t think to try and account for and will be presented with results that may or may not be useful to them. If you’re selling items from your search results page and “Sorry, no results match your query” comes up, how much money have you _not_ made? Even more insidious are results for common queries that aren’t providing the best results you have to offer; while users get results, there might not be the desired product within quick and easy reach to click and buy now.
Having Atlas Search enabled and in production is really the beginning of your search journey and also the beginning of the value you’ll get out of a well-tuned, and monitored, search engine. Atlas Search provides Query Analytics, giving us actionable insights into the `$search` activity of our Atlas Search indexes.
Note: Query Analytics is available in public preview for all MongoDB Atlas clusters on an M10 or higher running MongoDB v5.0 or higher to view the analytics information for the tracked search terms in the Atlas UI. Atlas Search doesn't track search terms or display analytics for queries on free and shared-tier clusters.
>Atlas Search Query Analytics focuses entirely on the frequency and number of results returned from each `$search` call. There are also several search metrics available for operational monitoring of CPU, memory, index size, and other useful data points.
## Factors that influence search results quality
You might be thinking, “Hey, I thought this Atlas Search thing would magically make my search results work well — why aren’t the results as my users expect? Why do some seemingly reasonable queries return no results or not quite the best results?”
Consider these various types of queries of concern:
| Query challenge | Example |
| :-------- | :------- |
| Common name typos/variations | Jacky Chan, Hairy Potter, Dotcor Suess |
| Relevancy challenged | the purple rain, the the [yes, there’s a band called that], to be or not to be |
| Part numbers, dimensions, measurements | ⅝” driver bit, 1/2" wrench, size nine dress, Q-36, Q36, Q 36 |
| Requests for assistance | Help!, support, want to return a product, how to redeem a gift card, fax number |
| Because you know better | cheap sushi [the user really wants “good” sushi, don’t recommend the cheap stuff], blue shoes [boost the brands you have in stock that make you the most money], best guitar for a beginner |
| Word stems | Find nemo, finds nemo, finding nemo |
| Various languages, character sets, romanization | Flughafen, integraçao,中文, ko’nichiwa |
| Context, such as location, recency, and preferences | MDB [boost most recent news of this company symbol], pizza [show me nearby and open restaurants] |
Consider the choices we made, or were dynamically made for us, when we built our Atlas Search index — specifically, the analyzer choices we make per string field. What we indexed determines what is searchable and in what ways it is searchable. A default `lucene.standard` analyzed field gives us pretty decent, language-agnostic “words” as searchable terms in the index. That’s the default and not a bad one. However, if your content is in a particular language, it may have some structural and syntactic rules that can be incorporated into the index and queries too. If you have part numbers, item codes, license plates, or other types of data that are precisely specified in your domain, users will enter them without the exact special characters, spacing, or case. Often, as developers or domain experts of a system, we don’t try the wrong or _almost_ correct syntax or format when testing our implementation, but our users do.
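To make that concrete, here is a hypothetical index definition fragment that keeps `lucene.standard` for free-text fields but adds a `lucene.keyword` multi-analyzer for exact matching on a part number field (the field names here are made up for illustration):
```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "description": {
        "type": "string",
        "analyzer": "lucene.standard"
      },
      "part_number": {
        "type": "string",
        "analyzer": "lucene.standard",
        "multi": {
          "exact": {
            "type": "string",
            "analyzer": "lucene.keyword"
          }
        }
      }
    }
  }
}
```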
With the number of ways that search results can go astray, we need to be keeping a close eye on what our users are experiencing and carefully tuning and improving.
## Virtuous search query management cycle
Maintaining a healthy search-based system deserves attention to the kinds of challenges just mentioned. A healthy search system management cycle includes these steps:
1. (Re-)deploy search
2. Measure and test
3. Make adjustments
4. Go to 1, repeat
### (Re-)deploying search
How you go about re-deploying the adjustments will depend on the nature of the changes being made, which could involve index configuration and/or application or query adjustments.
Here’s where the [local development environment for Atlas could be useful, as a way to make configuration and app changes in a comfortable local environment, push the changes to a broader staging environment, and then push further into production when ready.
### Measure and test
You’ll want to have a process for analyzing the search usage of your system, by tracking queries and their results over time. Tracking queries simply requires the addition of `searchTerms` tracking information to your search queries, as in this template:
```
{
  $search: {
    "index": "<index name>",
    "<operator-name>": {
      <operator-specification>
    },
    "tracking": {
      "searchTerms": "<search term>"
    }
  }
}
```
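For example, a query against a hypothetical `movies` collection that tracks the user's raw search term might look like this (the index name, collection, and field paths are assumptions for illustration):
```javascript
db.movies.aggregate([
  {
    $search: {
      "index": "default",
      "text": {
        "query": "summer vacation",
        "path": "title"
      },
      "tracking": {
        "searchTerms": "summer vacation"
      }
    }
  },
  { $limit: 10 },
  { $project: { _id: 0, title: 1 } }
])
```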
### Make adjustments
You’ve measured, twice even, and you’ve spotted a query or class of queries that need some fine-tuning. It’s part art and part science to tune queries, and with a virtuous search query management cycle in place to measure and adjust, you can have confidence that changes are improving the search results for you and your customers.
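For instance, if the analytics show that a popular query like "blue shoes" isn't surfacing the products you want to feature, one common adjustment is to add a boosted `should` clause in a `compound` query, keeping the tracking in place so the next measurement cycle shows whether the change helped. Here is a sketch; the field names, the "AcmeShoeCo" brand, and the boost value are all illustrative:
```javascript
{
  $search: {
    "index": "default",
    "compound": {
      "must": [
        { "text": { "query": "blue shoes", "path": ["title", "description"] } }
      ],
      "should": [
        {
          "text": {
            "query": "AcmeShoeCo",
            "path": "brand",
            "score": { "boost": { "value": 3 } }
          }
        }
      ]
    },
    "tracking": { "searchTerms": "blue shoes" }
  }
}
```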
Now, apply these adjustments, test, repeat, adjust, re-deploy, test... repeat.
So far, we’ve laid the general rationale and framework for this virtuous cycle of query analysis and tuning feedback loop. Let’s now see what actionable insights can be gleaned from Atlas Search Query Analytics.
## Actionable insights
The Atlas Search Query Analytics feature provides two reports of search activity: __All Tracked Search Queries__ and __Tracked Search Queries with No Results__. Each report provides the top tracked “search terms” for a selected time period, from the last day up to the last 90 days.
Let’s talk about the significance of each report.
### All Tracked Search Queries
What are the most popular search terms coming through your system over the last month? This report rolls that up for you.
![A chart of all tracked search queries over the last 30 days][1]
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt66de710be3567815/6597dec2dc76629c3b7ebbf0/last_30_all_search_queries_chart.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt859876a30d03a803/6597dff21c5d7c16060f3a34/last_30_top_search_terms.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt70d62c57c413a8b6/6597e06ab05b9eccd9d73b49/search_terms_agg_pipeline.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6f6c0668307d3bb2/6597e0dc1c5d7ca8bc0f3a38/last_30_no_results.png | md | {
"tags": [
"Atlas"
],
"pageDescription": "Do you know what your users are searching for? Atlas Search Query Analytics, gives us actionable insights.",
"contentType": "Article"
} | Query Analytics Part 1: Know Your Queries | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-semantic-kernel | created | # Building AI Applications with Microsoft Semantic Kernel and MongoDB Atlas Vector Search
We are excited to announce native support for MongoDB Atlas Vector Search in Microsoft Semantic Kernel. With this integration, users can bring the power of LLMs (large language models) to their proprietary data securely, and build generative AI applications using RAG (retrieval-augmented generation) with programming languages like Python and C#. The accompanying tutorial will walk you through an example.
## What is Semantic Kernel?
Semantic Kernel is a polyglot, open-source SDK that lets users combine various AI services with their applications. Semantic Kernel uses connectors to allow you to swap out AI services without rewriting code. Components of Semantic Kernel include:
- AI services: Supports AI services like OpenAI, Azure OpenAI, and Hugging Face.
- Programming languages: Supports conventional programming languages like C#, Python, and Java.
- Large language model (LLM) prompts: Supports the latest in LLM AI prompts with prompt templating, chaining, and planning capabilities.
- Memory: Provides different vectorized stores for storing data, including MongoDB.
## What is MongoDB Atlas Vector Search?
MongoDB Atlas Vector Search is a fully managed service that simplifies the process of effectively indexing high-dimensional vector embedding data within MongoDB and being able to perform fast vector similarity searches.
Embedding refers to the representation of words, phrases, or other entities as dense vectors in a continuous vector space. It's designed to ensure that words with similar meanings are grouped closer together. This method helps computer models better understand and process language by recognizing patterns and relationships between words and is what allows us to search by semantic meaning.
When data is converted into numeric vector embeddings using encoding models, these embeddings can be stored directly alongside their respective source data within the MongoDB database. This co-location of vector embeddings and the original data not only enhances the efficiency of queries but also eliminates potential synchronization issues. By avoiding the need to maintain separate databases or synchronization processes for the source data and its embeddings, MongoDB provides a seamless and integrated data retrieval experience.
This consolidated approach streamlines database management and allows for intuitive and sophisticated semantic searches, making the integration of AI-powered experiences easier.
## Microsoft Semantic Kernel and MongoDB
This combination enables developers to build AI-powered intelligent applications using MongoDB Atlas Vector Search and large language models from providers like OpenAI, Azure OpenAI, and Hugging Face.
Despite all their incredible capabilities, LLMs have a knowledge cutoff date and often need to be augmented with proprietary, up-to-date information for the particular business that an application is being built for. This “long-term memory for LLM” capability for AI-powered intelligent applications is typically powered by leveraging vector embeddings. Semantic Kernel allows for storing and retrieving this vector context for AI apps using the memory plugin (which now has support for MongoDB Atlas Vector Search).
## Tutorial
Atlas Vector Search is integrated in this tutorial to provide a way to interact with our memory store that was created through our MongoDB and Semantic Kernel connector.
This tutorial takes you through how to use Microsoft Semantic Kernel to properly upload and embed documents into your MongoDB Atlas cluster, and then conduct queries using Microsoft Semantic Kernel as well, all in Python!
## Pre-requisites
- MongoDB Atlas cluster
- IDE of your choice (this tutorial uses Google Colab — please refer to it if you’d like to run the commands directly)
- OpenAI API key
Let’s get started!
## Setting up our Atlas cluster
Visit the MongoDB Atlas dashboard and set up your cluster. In order to take advantage of the `$vectorSearch` operator in an aggregation pipeline, you need to run MongoDB Atlas 6.0.11 or higher. This tutorial can be built using a free cluster.
When you’re setting up your deployment, you’ll be prompted to set up a database user and rules for your network connection. Please ensure you save your username and password somewhere safe and have the correct IP address rules in place so your cluster can connect properly.
If you need more help getting started, check out our tutorial on MongoDB Atlas.
## Installing the latest version of Semantic Kernel
In order to be successful with our tutorial, let’s ensure we have the most up-to-date version of Semantic Kernel installed in our IDE. As of the creation of this tutorial, the latest version is 0.3.14. Please run this `pip` command in your IDE to get started:
```
!python -m pip install semantic-kernel==0.3.14.dev
```
Once it has been successfully run, you will see various packages being downloaded. Please ensure `pymongo` is downloaded in this list.
## Setting up our imports
Here, include the information about our OpenAI API key and our connection string.
Let’s set up the necessary imports:
```
import openai
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion, OpenAITextEmbedding
from semantic_kernel.connectors.memory.mongodb_atlas import MongoDBAtlasMemoryStore
kernel = sk.Kernel()
# Placeholders: replace these with your own OpenAI API key and MongoDB Atlas connection string.
openai.api_key = '<your-openai-api-key>'
MONGODB_CONNECTION_STRING = '<your-mongodb-atlas-connection-string>'
MONGODB_COLLECTION = 'randomFacts'
# Register the chat, embedding, and MongoDB Atlas memory store services with the kernel.
# (Exact constructor arguments may vary slightly between Semantic Kernel versions.)
kernel.add_chat_service("chat-gpt", OpenAIChatCompletion("gpt-3.5-turbo", openai.api_key))
kernel.add_text_embedding_generation_service("ada", OpenAITextEmbedding("text-embedding-ada-002", openai.api_key))
kernel.register_memory_store(memory_store=MongoDBAtlasMemoryStore(connection_string=MONGODB_CONNECTION_STRING))
kernel.import_skill(sk.core_skills.TextMemorySkill())
```
Importing in OpenAI is crucial because we are using their data model to embed not only our documents but also our queries. We also want to import their Text Embedding library for this same reason. For this tutorial, we are using the embedding model `ada-002`, but please double check that you’re using a model that is compatible with your OpenAI API key.
Our `MongoDBAtlasMemoryStore` class is very important as it’s the part that enables us to use MongoDB as our memory store. This means we can connect to the Semantic Kernel and have our documents properly saved and formatted in our cluster. For more information on this class, please refer to the repository.
This is also where you will need to incorporate your OpenAI API key along with your MongoDB connection string, and other important variables that we will use. The ones above are just a suggestion, but if they are changed while attempting the tutorial, please ensure they are consistent throughout. For help on accessing your OpenAI key, please read the section below.
### Generate your OpenAI key
In order to generate our embeddings, we will use the OpenAI API. First, we’ll need a secret key. To create your OpenAI key, you'll need to create an account. Once you have that, visit the OpenAI API and you should be greeted with a screen like the one below. Click on your profile icon in the top right of the screen to get the dropdown menu and select “View API keys”.
Here, you can generate your own API key by clicking the “Create new secret key” button. Give it a name and store it somewhere safe. This is all you need from OpenAI to use their API to generate your embeddings.
## The need for retrieval-augmented generation (RAG)
Retrieval-augmented generation, also known as RAG, is an NLP technique that can help improve the quality of large language models (LLMs). It’s an artificial intelligence framework for getting data from an external knowledge source. The memory store we are creating using Microsoft Semantic Kernel is an example of this. But why is RAG necessary? Let’s take a look at an example.
LLMs like OpenAI GPT-3.5 exhibit an impressive and wide range of skills. They are trained on the data available on the internet about a wide range of topics and can answer queries accurately. Using Semantic Kernel, let’s ask OpenAI’s LLM if Albert Einstein likes coffee:
```
# Wrap your prompt in a function
prompt = kernel.create_semantic_function("""
As a friendly AI Copilot, answer the question: Did Albert Einstein like coffee?
""")
print(prompt())
```
The output received is:
```
Yes, Albert Einstein was known to enjoy coffee. He was often seen with a cup of coffee in his hand and would frequently visit cafes to discuss scientific ideas with his colleagues over a cup of coffee.
```
Since this information was available on the public internet, the LLM was able to provide the correct answer.
But LLMs have their limitations: They have a knowledge cutoff (September 2021, in the case of OpenAI) and do not know about proprietary and personal data. They also have a tendency to hallucinate — that is, they may confidently make up facts and provide answers that may seem to be accurate but are actually incorrect. Here is an example to demonstrate this knowledge gap:
```
prompt = kernel.create_semantic_function("""
As a friendly AI Copilot, answer the question: Did I like coffee?
""")
print(prompt())
```
The output received is:
```
As an AI, I don't have personal preferences or experiences, so I can't say whether "I" liked coffee or not. However, coffee is a popular beverage enjoyed by many people around the world. It has a distinct taste and aroma that some people find appealing, while others may not enjoy it as much. Ultimately, whether someone likes coffee or not is a subjective matter and varies from person to person.
```
As you can see, there is a knowledge gap here because we don’t have our personal data loaded in OpenAI that our query can access. So let’s change that. Continue on through the tutorial to learn how to augment the knowledge base of the LLM with proprietary data.
## Add some documents into our MongoDB cluster
Once we have incorporated our MongoDB connection string and our OpenAI API key, we are ready to add some documents into our MongoDB cluster.
Please ensure you’re specifying the proper collection variable below that we set up above.
```
async def populate_memory(kernel: sk.Kernel) -> None:
    # Add some documents to the semantic memory
    await kernel.memory.save_information_async(
        collection=MONGODB_COLLECTION, id="1", text="We enjoy coffee and Starbucks"
    )
    await kernel.memory.save_information_async(
        collection=MONGODB_COLLECTION, id="2", text="We are Associate Developer Advocates at MongoDB"
    )
    await kernel.memory.save_information_async(
        collection=MONGODB_COLLECTION, id="3", text="We have great coworkers and we love our teams!"
    )
    await kernel.memory.save_information_async(
        collection=MONGODB_COLLECTION, id="4", text="Our names are Anaiya and Tim"
    )
    await kernel.memory.save_information_async(
        collection=MONGODB_COLLECTION, id="5", text="We have been to New York City and Dublin"
    )
```
Here, we are using the `populate_memory` function to define five documents with various facts about Anaiya and Tim. As you can see, the name of our collection is called “randomFacts”, we have specified the ID for each document (please ensure each ID is unique, otherwise you will get an error), and then we have included a text phrase we want to embed.
Once you have successfully filled in your information and have run this command, let’s add them to our cluster — aka let’s populate our memory! To do this, please run the command:
```
print("Populating memory...aka adding in documents")
await populate_memory(kernel)
```
Once this command has been successfully run, you should see the database, collection, documents, and their embeddings populate in your Atlas cluster. The screenshot below shows how the first document looks after running these commands.
Once the documents added to our memory have their embeddings, let’s set up our search index and ensure we can generate embeddings for our queries.
## Create a vector search index in MongoDB
In order to use the `$vectorSearch` operator on our data, we need to set up an appropriate search index. We’ll do this in the Atlas UI. Select the “Search" tab on your cluster and click “Create Search Index”.
We want to choose the "JSON Editor Option" and click "Next".
On this page, we're going to select our target database, `semantic-kernel`, and collection, `randomFacts`.
For this tutorial, we are naming our index `defaultRandomFacts`. The index will look like this:
```json
{
"mappings": {
"dynamic": true,
"fields": {
"embedding": {
"dimensions": 1536,
"similarity": "dotProduct",
"type": "knnVector"
}
}
}
}
```
The fields specify the embedding field name in our documents, `embedding`, the dimensions of the model used to embed, `1536`, and the similarity function to use to find K-nearest neighbors, `dotProduct`. It's very important that the dimensions in the index match that of the model used for embedding. This data has been embedded using the same model as the one we'll be using, but other models are available and may use different dimensions.
Check out our Vector Search documentation for more information on the index configuration settings.
## Query documents using Microsoft Semantic Kernel
In order to query your new documents hosted in your MongoDB cluster “memory” store, we can use the `memory.search_async` function. Run the following commands and watch the magic happen:
```
result = await kernel.memory.search_async(MONGODB_COLLECTION, 'What is my job title?')
print(f"Retrieved document: {result0].text}, {result[0].relevance}")
```
Now you can ask any question and get an accurate response!
Examples of questions asked and the results:
*(Screenshot: the result of the question "What is my job title?")*
## Conclusion
In this tutorial, you have learned a lot of very useful concepts:
- What Microsoft Semantic Kernel is and why it’s important.
- How to connect Microsoft Semantic Kernel to a MongoDB Atlas cluster.
- How to add in documents to your MongoDB memory store (and embed them, in the process, through Microsoft Semantic Kernel).
- How to query your new documents in your memory store using Microsoft Semantic Kernel.
For more information on MongoDB Vector Search, please visit the documentation, and for more information on Microsoft Semantic Kernel, please visit their repository and resources.
If you have any questions, please visit our MongoDB Developer Community Forum. | md | {
"tags": [
"Atlas"
],
"pageDescription": "Follow this comprehensive guide to getting started with Microsoft Semantic Kernel and MongoDB Atlas Vector Search.",
"contentType": "Tutorial"
} | Building AI Applications with Microsoft Semantic Kernel and MongoDB Atlas Vector Search | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-unity-persistence | created | # Saving Data in Unity3D Using Realm
(Part 5 of the Persistence Comparison Series)
We started this tutorial series by looking at Unity and .NET native ways to persist data, like `PlayerPrefs`, `File`, and the `BinaryReader` / `BinaryWriter`. In the previous part, we then continued on to external libraries and with that, databases. We looked at `SQLite` as one example.
This time, we will look at another database. One that makes it very easy and intuitive to work with data: the Realm Unity SDK.
First, here is an overview over the complete series:
- Part 1: PlayerPrefs
- Part 2: Files
- Part 3: BinaryReader and BinaryWriter
- Part 4: SQLite
- Part 5: Realm Unity SDK *(this tutorial)*
Similar to the previous parts, this tutorial can also be found in our Unity examples repository on the persistence-comparison branch.
Each part is sorted into a folder. The four scripts we will be looking at in this tutorial are in the `Realm` sub folder. But first, let's look at the example game itself and what we have to prepare in Unity before we can jump into the actual coding.
## Example game
*Note that if you have worked through any of the other tutorials in this series, you can skip this section since we're using the same example for all parts of the series, so that it's easier to see the differences between the approaches.*
The goal of this tutorial series is to show you a quick and easy way to make some first steps in the various ways to persist data in your game.
Therefore, the example we'll be using will be as simple as possible in the editor itself so that we can fully focus on the actual code we need to write.
A simple capsule in the scene will be used so that we can interact with a game object. We then register clicks on the capsule and persist the hit count.
When you open up a clean 3D template, all you need to do is choose `GameObject` -> `3D Object` -> `Capsule`.
You can then add scripts to the capsule by activating it in the hierarchy and using `Add Component` in the inspector.
The scripts we will add to this capsule showcasing the different methods will all have the same basic structure that can be found in `HitCountExample.cs`.
```cs
using UnityEngine;
///
/// This script shows the basic structure of all other scripts.
///
public class HitCountExample : MonoBehaviour
{
// Keep count of the clicks.
    [SerializeField] private int hitCount; // 1
private void Start() // 2
{
// Read the persisted data and set the initial hit count.
hitCount = 0; // 3
}
private void OnMouseDown() // 4
{
// Increment the hit count on each click and save the data.
hitCount++; // 5
}
}
```
The first thing we need to add is a counter for the clicks on the capsule (1). Add a `[SerializeField]` here so that you can observe it while clicking on the capsule in the Unity editor.
Whenever the game starts (2), we want to read the current hit count from the persistence and initialize `hitCount` accordingly (3). This is done in the `Start()` method that is called whenever a scene is loaded for each game object this script is attached to.
The second part to this is saving changes, which we want to do whenever we register a mouse click. The Unity message for this is `OnMouseDown()` (4). This method gets called every time the `GameObject` that this script is attached to is clicked (with a left mouse click). In this case, we increment the `hitCount` (5) which will eventually be saved by the various options shown in this tutorial series.
## Realm
(See `HitCount.cs` and `RealmExampleSimple.cs` in the repository for the finished version.)
Now that you have seen the example and the increasing hit counter, the next step will be to actually persist it so that it's available the next time we start the game.
As described in the documentation, you can install Realm in two different ways:
- Install with NPM
- Manually Install a Tarball
Let's choose option #1 for this tutorial. The first thing we need to do is to import the Realm framework into Unity using the project settings.
Go to `Window` → `Package Manager` → cogwheel in the top right corner → `Advanced Project Settings`:
Within the `Scoped Registries`, you can add the `Name`, `URL`, and `Scope` as follows:
This adds `NPM` as a source for libraries. The final step is to tell the project which dependencies to actually integrate into the project. This is done in the `manifest.json` file which is located in the `Packages` folder of your project.
Here you need to add the following line to the `dependencies`:
```json
"io.realm.unity": ""
```
Replace `<version-number>` with the most recent Realm version found in https://github.com/realm/realm-dotnet/releases and you're all set.
The final `manifest.json` should look something like this:
```json
{
"dependencies": {
...
"io.realm.unity": "10.13.0"
},
"scopedRegistries":
{
"name": "NPM",
"url": "https://registry.npmjs.org/",
"scopes": [
"io.realm.unity"
]
}
]
}
```
When you switch back to Unity, it will reload the dependencies. If you then open the `Package Manager` again, you should see `Realm` as a new entry in the list on the left:
*Screenshot: Realm in the Package Manager*
We can now start using Realm in our Unity project.
Similar to other databases, we need to start by telling the Realm SDK what our database structure is supposed to look like. We have seen this in the previous tutorial with SQL, where we had to define tables and columns for each class we want to save.
With Realm, this is a lot easier. We can just define the structure in our code by adding some additional information to let Realm know how to read that code.
Look at the following definition of `HitCount`. You will notice that the super class for this one is `RealmObject` (1). When starting your game, Realm will automatically look for all sub classes of `RealmObject` and know that it needs to be prepared to persist this kind of data. This is all you need to do to get started when defining a new class. One additional thing we will do here, though, is to define which of the properties is the primary key. We will see why later. Do this by adding the attribute `PrimaryKey` to the `Id` property (2).
```cs
using Realms;
public class HitCount: RealmObject // 1
{
    [PrimaryKey] // 2
public int Id { get; set; }
public int Value { get; set; }
private HitCount() { }
public HitCount(int id)
{
Id = id;
}
}
```
With our data structure defined, we can now look at what we have to do to elevate our example game so that it persists data using Realm. Starting with the `HitCountExample.cs` as the blueprint, we create a new file `RealmExampleSimple.cs`:
```cs
using UnityEngine;
public class RealmExampleSimple : MonoBehaviour
{
[SerializeField] private int hitCount;
private void Start()
{
hitCount = 0;
}
private void OnMouseDown()
{
hitCount++;
}
}
```
First, we'll add two more fields — `realm` and `hitCount` — and rename the `SerializeField` to `hitCounter` to avoid any name conflicts:
```cs
[SerializeField] private int hitCounter = 0;
private Realm realm;
private HitCount hitCount;
```
Those two additional fields will let us make sure we reuse the same realm for load and save. The same holds true for the `HitCount` object we need to create when starting the scene. To do this, substitute the `Start()` method with the following:
```cs
void Start()
{
realm = Realm.GetInstance(); // 1
    hitCount = realm.Find<HitCount>(1); // 2
if (hitCount != null) // 3
{
hitCounter = hitCount.Value;
}
else // 4
{
hitCount = new HitCount(1); // 5
realm.Write(() => // 6
{
realm.Add(hitCount);
});
}
}
```
A new Realm is created by calling `Realm.GetInstance()` (1). We can then use this `realm` object to handle all operations we need in this example. Start by searching for an already existing `HitCount` object. `Realm` offers a `Find<>` function (2) that lets you search for a specific class that was defined before. Additionally, we can pass along the primary key we want to look for. For this simple example, we will only ever need one `HitCount` object, so we just assign it the primary key `1` and also search for this one here.
There are two situations that can happen: If the game has been started before, the realm will return a `hitCount` object and we can use that to load the initial state of the `hitCounter` (3) using the `hitCount.Value`. The other possibility is that the game has not been started before and we need to create the `HitCount` object (4). To create a new object in Realm, you first create it the same way you would create any other object in C# (5). Then we need to add this object to the database. Whenever changes are made to the realm, we need to wrap these changes into a write block to make sure we're prevented from conflicting with other changes that might be going on — for example, on a different thread (6).
Whenever the capsule is clicked, the `hitCounter` gets incremented in `OnMouseDown()`. Here we need to add the change to the database, as well:
```cs
private void OnMouseDown()
{
hitCounter++;
realm.Write(() => // 8
{
hitCount.Value = hitCounter; // 7
});
}
```
Within `Start()`, we made sure to create a new `hitCount` object that can be used to load and save changes. So all we need to do here is to update the `Value` with the new `hitCounter` value (7). Note, as before, we need to wrap this change into a `Write` block to guarantee data safety.
This is all you need to do for your first game using Realm. Easy, isn't it?
Run it and try it out! Then we will look into how to extend this a little bit.
## Extended example
(See `HitCountExtended.cs` and `RealmExampleExtended.cs` in the repository for the finished version.)
To make it easy to compare with the other parts of the series, all we will do in this section is add the key modifiers and save the three different versions:
- Unmodified
- Shift
- Control
As you will see in a moment, this small change is almost too simple to create a whole section around it, but it will also show you how easy it is to work with Realm as you go along in your project.
First, let's create a new `HitCountExtended.cs` so that we can keep and look at both structures side by side:
```cs
using Realms;
public class HitCountExtended : RealmObject
{
[PrimaryKey]
public int Id { get; set; }
public int Unmodified { get; set; } // 1
public int Shift { get; set; } // 2
public int Control { get; set; } // 3
private HitCountExtended() { }
public HitCountExtended(int id)
{
Id = id;
}
}
```
Compared to the `HitCount.cs`, we've renamed `Value` to `Unmodified` (1) and added `Shift` (2) as well as `Control` (3). That's all we need to do in the entity that will hold our data. How do we need to adjust the `MonoBehaviour`?
First, we'll update the outlets to the Unity editor (the `SerializeFields`) by replacing `hitCounter` with those three similar to the previous tutorials:
```cs
[SerializeField] private int hitCountUnmodified = 0;
[SerializeField] private int hitCountShift = 0;
[SerializeField] private int hitCountControl = 0;
```
Equally, we add a `KeyCode` field and use the `HitCountExtended` instead of the `HitCount`:
```cs
private KeyCode modifier = default;
private Realm realm;
private HitCountExtended hitCount;
```
Let's first adjust the loading of the data. Instead of searching for a `HitCount`, we now search for a `HitCountExtended`:
```cs
hitCount = realm.Find<HitCountExtended>(1);
```
If it was found, we extract the three values and set them on the corresponding hit counters to visualize them in the Unity Editor:
```cs
if (hitCount != null)
{
hitCountUnmodified = hitCount.Unmodified;
hitCountShift = hitCount.Shift;
hitCountControl = hitCount.Control;
}
```
If no object was created yet, we will go ahead and create a new one like we did in the simple example:
```cs
else
{
hitCount = new HitCountExtended(1);
realm.Write(() =>
{
realm.Add(hitCount);
});
}
```
If you have worked through the previous tutorials, you've seen the `Update()` function already. It will be the same for this tutorial as well, since all it does is detect whichever key modifier is clicked, independent of the way we later on save that modifier:
```cs
private void Update()
{
// Check if a key was pressed.
if (Input.GetKey(KeyCode.LeftShift)) // 1
{
// Set the LeftShift key.
modifier = KeyCode.LeftShift;
}
else if (Input.GetKey(KeyCode.LeftControl)) // 2
{
// Set the LeftControl key.
modifier = KeyCode.LeftControl;
}
else
{
// In any other case reset to default and consider it unmodified.
modifier = default; // 3
}
}
```
The important bits here are the checks for `LeftShift` and `LeftControl`, which exist in the enum `KeyCode` (1+2). To check if one of those keys is pressed in the current frame (remember, `Update()` is called once per frame), we use `Input.GetKey()` (1+2) and pass in the key we're interested in. If neither of those keys is pressed, we use the `Unmodified` version, which is just `default` in this case (3).
The final part that has to be adjusted is the mouse click that increments the counter. Depending on the `modifier` that was clicked, we increase the corresponding `hitCount` like so:
```cs
switch (modifier)
{
case KeyCode.LeftShift:
hitCountShift++;
break;
case KeyCode.LeftControl:
hitCountControl++;
break;
default:
hitCountUnmodified++;
break;
}
```
After we've done this, we once again update the realm like we did in the simple example, this time updating all three fields in the `HitCountExtended`:
```cs
realm.Write(() =>
{
hitCount.Unmodified = hitCountUnmodified;
hitCount.Shift = hitCountShift;
hitCount.Control = hitCountControl;
});
```
With this, the modifiers are done for the Realm example and you can start the game and try it out.
## Conclusion
Persisting data in games presents you with many different options to choose from. In this tutorial, we've looked at Realm. It's an easy-to-use and easy-to-learn database that can be integrated into your game without much work. All we had to do was add it via NPM, define the objects we use in the game as `RealmObject`, and then use `Realm.Write()` to add and change data, along with `Realm.Find<>()` to retrieve data from the database.
There is a lot more that Realm can do that would go beyond the limits of what can be shown in a single tutorial.
You can find more examples for local Realms in the example repository, as well. It contains examples for one feature you might ask for next after having worked through this tutorial: How do I synchronize my data between devices? Have a look at Realm Sync and some examples.
I hope this series gave you some ideas and insights on how to save and load data in Unity games and prepares you for the choice of which one to pick.
Please provide feedback and ask any questions in the Realm Community Forum. | md | {
"tags": [
"Realm",
"C#"
],
"pageDescription": "Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well.",
"contentType": "Tutorial"
} | Saving Data in Unity3D Using Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/getting-started-azure-app-service-atlas | created | # Getting Started with MongoDB Atlas, NodeJS, and Azure App Service
MongoDB Atlas and Azure are great friends! In fact, they became even better friends recently with the addition of the MongoDB Atlas Pay-as-You-Go Software as a Service (SaaS) subscription to the Azure Marketplace, allowing you to use your existing Azure credits to enjoy all the benefits of the MongoDB Atlas Developer Data Platform. So there is no better time to learn how you can take advantage of both of these.
In this article, we are going to see how you can deploy a MERN stack application to Azure Web Apps, part of Azure App Service, in a few simple steps. By the end of this, you will have your own version of the website that can be found here.
## Prerequisites
There are a few things you will need in place in order to follow this article.
1. Atlas Account and database cluster.
**N.B.** You can follow the Getting Started with Atlas guide to learn how to create a free Atlas account, create your first free-forever cluster, and get your all-important connection string to the database.
2. Azure Account.
3. Have the mern-stack-azure-deployment-example forked to your own account.
### Database Network Access
MongoDB Atlas comes with database level security out of the box. This includes not only the users who can connect but also where you can connect from. For this reason, you will need to configure network access rules for who or what can access your applications.
The most common connection technique is via IP address. If you wish to use this with Azure, you will need to allow access from anywhere inside Atlas as we cannot predict what your application IP addresses will be over time.
Atlas also supports the use of network peering and private connections using the major cloud providers. This includes Azure Private Link or Azure Virtual Private Connection (VPC) if you are using an M10 or above cluster.
## What’s the MERN Stack?
Before we get started deploying our MERN Stack application to Azure, it’s good to cover what the MERN Stack is.
MERN stands for MongoDB, Express, React, Node, and is named after the technologies that make up the stack.
* **MongoDB**: a general-purpose document database
* **Express**: Node.js web framework
* **React**: a client-side JavaScript framework
* **Node.js**: the most widely used JavaScript web server
## Create the Azure App Service
So we have the pieces in place we need, including a place to store data and an awesome MERN stack repo ready to go. Now we need to create our Azure App Service instance so we can take advantage of its deployment and hosting capabilities:
1. Inside the Azure Portal, in the search box at the top, search for *App Services* and select it.
2. Click Create to trigger the creation wizard.
3. Enter the following information:
- **Subscription**: Choose your preferred existing subscription.
***Note: When you first create an account, you are given a free trial subscription with $150 free credits you can use***
- **Resource Group**: Use an existing or click the *Create new* link underneath the box to create a new one.
- **Name**: Choose what you would like to call it. The name has to be unique as it is used to create a URL ending .azurewebsites.net but otherwise, the choice is yours.
- **Publish**: Code.
    - **Runtime stack**: Node 18 LTS.
- **OS**: Linux.
- **Region**: Pick the one closest to you.
- **Pricing Plans**: F1 - this is the free version.
4. Once you are happy, select Review + create in the bottom left.
5. Click Create in the bottom left and await deployment.
6. Once created, it will allow you to navigate to your new app service so we can start configuring it.
## Configuring our new App Service
Now that we have App Service set up, we need to add our connection string to our MongoDB Atlas cluster to app settings, so when deployed the application will be able to find the value and connect successfully.
1. From the left-side menu in the Azure Portal inside your newly created App Service, click Configuration under the Settings section.
2. We then need to add a new value in the Application Settings section. **NOT** the Connection String section, despite the name. Click the New application setting button under this section to add one.
3. Add the following values:
- **Name**: ATLAS_URI
- **Value**: Your Atlas connection string from the cluster you created earlier.
## Deploy to Azure App Services
We have our application, we have our app service and we have our connection string stored. Now it is time to link to our GitHub repo to take advantage of CI/CD goodness in Azure App Services.
1. Inside your app service app, click Deployment Center on the left in the Deployment section.
2. In the Settings tab that opens by default, from Source, select GitHub.
3. Fill out the boxes under the GitHub section that appears to select the main branch of your fork of the MERN stack repo.
4. Under Workflow Option: Make sure Add a workflow is the selected option.
5. Click Save at the top.
This will trigger a GitHub Actions build. If you view this in GitHub, you will see it will fail because we need to make some changes to the YAML file it created to allow it to build and deploy successfully.
### Configuring our GitHub Actions Workflow file
Now that we have connected GitHub Actions and App Services, there is a new folder in the GitHub repo called .github with a subfolder called workflows. This is where you will find the yaml files that App Services auto generated for us in the last section.
However, as mentioned, we need to adjust it slightly to work for us:
1. In the jobs section, there will be a sub section for the build job. Inside this we need to replace the whole steps section with the code found in this gist
- **N.B.** *The reason it is in a Gist is because indentation is really crucial in YAML and this makes sure the layout stays as it should be to make your life easier.*
2. As part of this, we have named our app ‘mern-app’ so we need to make sure this matches in the deploy step. Further down in the jobs section of the yaml file, you will find the deploy section and its own steps subsection. In the first name step, you will see a bit where it says node-app. Change this to mern-app. This associates the build and deploy apps.
That’s it! All you need to do now is commit the changes to the file. This will trigger a run of the GitHub Action workflow.
Once it builds successfully, you can go ahead and visit your website.
To find the URL of your website, visit the project inside the Azure Portal and in the Overview section you will find the link.
You should now have a working NodeJS application that uses MongoDB Atlas that is deployed to Azure App Services.
## Summary
You are now well on your way to success with Azure App Services, NodeJS and MongoDB Atlas!
In this article, we created an Azure App Service, added our connection string inside Azure and then linked it up to our existing MERN stack example repo in GitHub, before customizing the generated workflow file for our application. Super simple and shows what can be done with the power of the cloud and MongoDB’s Developer Data Platform!
Get started with Atlas on Azure today!
| md | {
"tags": [
"Atlas",
"JavaScript",
"Node.js",
"Azure"
],
"pageDescription": "How to easily deploy a MERN Stack application to Azure App Service.",
"contentType": "Tutorial"
} | Getting Started with MongoDB Atlas, NodeJS, and Azure App Service | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/swift/full-stack-swift | created | # Building a Full Stack application with Swift
I recently revealed on Twitter something that may have come as a surprise to many of my followers from the Swift/iOS community: I had never written an iOS app before! I've been writing Swift for a few years now but have focused entirely on library development and server-side Swift.
A highly compelling feature of Swift is that it allows you to write an iOS app and a corresponding backend – a complete, end-to-end application – all in the same language. This is similar to how using Node.js for a web app backend allows you to write Javascript everywhere.
To test this out and learn about iOS development, I decided to build a full-stack application entirely in Swift. I settled on a familiar CRUD app I've created a web version of before, an application that allows the user to manage a list of kittens and information about them.
I chose to build the app using the following components:
* A backend server, written using the popular Swift web framework Vapor and using the MongoDB Swift driver via MongoDBVapor to store data in MongoDB
* An iOS application built with SwiftUI and using SwiftBSON to support serializing/deserializing data to/from extended JSON, a version of JSON with MongoDB-specific extensions to simplify type preservation
* A SwiftPM package containing the code I wanted to share between the two above components
I was able to combine all of this into a single code base with a folder structure as follows:
```
FullStackSwiftExample/
├── Models/
│ ├── Package.swift
│ └── Sources/
│ └── Models/
│ └── Models.swift
├── Backend/
│ ├── Package.swift
│ └── Sources/
│ ├── App/
│ │ ├── configure.swift
│ │ └── routes.swift
│ └── Run/
│ └── main.swift
└── iOSApp/
└── Kittens/
├── KittensApp.swift
├── Utilities.swift
├── ViewModels/
│ ├── AddKittenViewModel.swift
│ ├── KittenListViewModel.swift
│ └── ViewUpdateDeleteKittenViewModel.swift
└── Views/
├── AddKitten.swift
├── KittenList.swift
└── ViewUpdateDeleteKitten.swift
```
Overall, it was a great learning experience for me, and although the app is pretty basic, I'm proud of what I was able to put together! Here is the finished application, instructions to run it, and documentation on each component.
In the rest of this post, I'll discuss some of my takeaways from this experience.
## 1. Sharing data model types made it straightforward to consistently represent my data throughout the stack.
As I mentioned above, I created a shared SwiftPM package for any code I wanted to use both in the frontend and backend of my application. In that package, I defined `Codable` types modeling the data in my application, for example:
```swift
/**
* Represents a kitten.
* This type conforms to `Codable` to allow us to serialize it to and deserialize it from extended JSON and BSON.
* This type conforms to `Identifiable` so that SwiftUI is able to uniquely identify instances of this type when they
* are used in the iOS interface.
*/
public struct Kitten: Identifiable, Codable {
/// Unique identifier.
public let id: BSONObjectID
/// Name.
public let name: String
/// Fur color.
public let color: String
/// Favorite food.
public let favoriteFood: CatFood
/// Last updated time.
public let lastUpdateTime: Date
private enum CodingKeys: String, CodingKey {
// We store the identifier under the name `id` on the struct to satisfy the requirements of the `Identifiable`
// protocol, which this type conforms to in order to allow usage with certain SwiftUI features. However,
// MongoDB uses the name `_id` for unique identifiers, so we need to use `_id` in the extended JSON
// representation of this type.
case id = "_id", name, color, favoriteFood, lastUpdateTime
}
}
```
When you use separate code/programming languages to represent data on the frontend versus backend of an application, it's easy for implementations to get out of sync. But in this application, since the same exact model type gets used for the frontend **and** backend representations of kittens, there can't be any inconsistency.
Since this type conforms to the `Codable` protocol, we also get a single, consistent definition for a kitten's representation in external data formats. The formats used in this application are:
* Extended JSON, which the frontend and backend use to communicate via HTTP, and
* BSON, which the backend and MongoDB use to communicate
For a concrete example of using a model type throughout the stack, when a user adds a new kitten via the UI, the data flows through the application as follows:
1. The iOS app creates a new `Kitten` instance containing the user-provided data
1. The `Kitten` instance is serialized to extended JSON via `ExtendedJSONEncoder` and sent in a POST request to the backend
1. The Vapor backend deserializes a new instance of `Kitten` from the extended JSON data using `ExtendedJSONDecoder`
1. The `Kitten` is passed to the MongoDB driver method `MongoCollection.insertOne()`
1. The MongoDB driver uses its built-in `BSONEncoder` to serialize the `Kitten` to BSON and send it via the MongoDB wire protocol to the database
With all these transformations, it can be tricky to ensure that both the frontend and backend remain in sync in terms of how they model, serialize, and deserialize data. Using Swift everywhere and sharing these `Codable` data types allowed me to avoid those problems altogether in this app.
## 2. Working in a single, familiar language made the development experience seamless.
Despite having never built an iOS app before, I found my existing Swift experience made it surprisingly easy to pick up on the concepts I needed to implement the iOS portion of my application. I suspect it's more common that someone would go in the opposite direction, but I think iOS experience would translate well to writing a Swift backend too!
I used several Swift language features such as protocols, trailing closures, and computed properties in both the iOS and backend code. I was also able to take advantage of Swift's new built-in features for concurrency throughout the stack. I used the `async` APIs on `URLSession` to send HTTP requests from the frontend, and I used Vapor and the MongoDB driver's `async` APIs to handle requests on the backend. It was much easier to use a consistent model and syntax for concurrent, asynchronous programming throughout the application than to try to keep straight in my head the concurrency models for two different languages at once.
In general, using the same language really made it feel like I was building a single application rather than two distinct ones, and greatly reduced the amount of context-switching I had to do as I alternated between work on the frontend and backend.
## 3. SwiftUI and iOS development are really cool!
Many of my past experiences trying to cobble together a frontend for school or personal projects using HTML and Javascript were frustrating. This time around, the combination of using my favorite programming language and an elegant, declarative framework made writing the frontend very enjoyable. More generally, it was great to finally learn a bit about iOS development, which is what most of the Swift developers I know from the community actually do!
---
In conclusion, my first foray into iOS development building this full-stack Swift app was a lot of fun and a great learning experience. It strongly demonstrated to me the benefits of using a single language to build an entire application, and using a language you're already familiar with as you venture into programming in a new domain.
I've included a list of references below, including a link to the example application. Please feel free to get in touch with any questions or suggestions regarding the application or the MongoDB libraries listed below – the best way to get in touch with me and my team is by filing a GitHub issue or Jira ticket!
## References
* Example app source code
* MongoDB Swift driver and documentation
* MongoDBVapor and documentation
* SwiftBSON and documentation
* Vapor
* SwiftUI | md | {
"tags": [
"Swift",
"iOS"
],
"pageDescription": "Curious about mobile and server-side swift? Use this tutorial and example code!",
"contentType": "Code Example"
} | Building a Full Stack application with Swift | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/getting-started-mongodb-c | created | # Getting Started with MongoDB and C
In this article we'll install the MongoDB C driver on macOS, and use this driver to write some sample console applications that can interact with your MongoDB data by performing basic CRUD operations. We'll use Visual Studio Code to type in the code and the command line to compile and run our programs. If you want to try it out now, all source code is in the GitHub repository.
## Table of contents
- Prerequisites
- Installation: VS Code, C Extensions, Xcode
- Installing the C Driver
- Hello World MongoDB!
- Setting up the client and pinging MongoDB Atlas
- Compiling and running our code
- Connecting to the database and listing all collections
- Creating a JSON object in C
- CRUD in MongoDB using the C driver
- Querying data
- Inserting a new document
- Deleting a document
- Updating a document
- Wrapping up
## Prerequisites
1. A MongoDB Atlas account with a cluster created.
2. The sample dataset loaded into the Atlas cluster (or you can modify the sample code to use your own database and collection).
3. Your machine’s IP address whitelisted. Note: You can add 0.0.0.0/0 as the IP address, which should allow access from any machine. This setting is not recommended for production use.
## VS Code, C extensions, Xcode
1. We will use Visual Studio Code, available in macOS, Windows, and Linux, because it has official support for C code. Just download and install the appropriate version.
2. We need the C extensions, which will be suggested when you open a C file for the first time. You can also open extensions and search for "C/C++" and install them. This will install several extensions: C/C++, C/C++ Themes, and CMake.
3. The last step is to make sure we have a C compiler. For that, either install Xcode from the Mac App Store or run in a terminal:
```bash
$ xcode-select --install
```
Although we can use CMake to build our C applications (and you have detailed instructions on how to do it), we'll use VS Code to type our code in and the terminal to build and run our programs.
## Installing the C driver
In macOS, if we have the package manager homebrew installed (which you should), then we just open a terminal and type in:
```bash
$ brew install mongo-c-driver
```
You can also download the source code and build the driver, but using brew is just way more convenient.
## Configuring VS Code extensions
To make autocomplete work in VS Code, we need to change the extension's config so that it "sees" the newly installed libraries. We want to change our INCLUDE_PATH so that IntelliSense can check our code as we type it and so that we can build our app from VS Code.
To do that, from VS Code, open the `.vscode` hidden folder, and then click on c_cpp_properties.json and add these lines:
```javascript
{
"configurations":
{
"name": "Mac",
"includePath": [
"/usr/local/include/libbson-1.0/**",
"/usr/local/include/libmongoc-1.0/**",
"${workspaceFolder}/**"
],
...
}
]
}
```
Now, open tasks.json and add these lines to the args array:
```
"-I/usr/local/include/libmongoc-1.0",
"-I/usr/local/include/libbson-1.0",
"-lmongoc-1.0",
"-lbson-1.0",`
```
With these, we're telling VS Code where to find the MongoDB C libraries so it can compile and check our code as we type.
## Hello World MongoDB!
The source code is available on GitHub.
## Setting up the client and pinging MongoDB Atlas
Let’s start with a simple program that connects to the MongoDB Atlas cluster and pings the server. For that, we need to get the connection string (URI) to the cluster and add it in our code. The best way is to create a new environment variable with the key “MONGODB_URI” and value the connection string (URI). It’s a good practice to keep the connection string decoupled from the code, but in this example, for simplicity, we'll have our connection string hardcoded.
We include the MongoDB driver and send an "echo" command from our `main` function. This example shows us how to initialize the MongoDB C client, how to create a command, manipulate JSON (in this case, BCON, BSON C Object Notation), send a command, process the response and error, and release any memory used.
```c
// hello_mongo.c
#include <mongoc/mongoc.h>
int main(int argc, char const *argv[]) {
// your MongoDB URI connection string
  const char *uri_string = "mongodb+srv://<username>:<password>@<cluster-url>";
// MongoDB URI created from above string
mongoc_uri_t *uri;
// MongoDB Client, used to connect to the DB
mongoc_client_t *client;
// Command to be sent, and reply
bson_t *command, reply;
// Error management
bson_error_t error;
// Misc
char *str;
bool retval;
/*
* Required to initialize libmongoc's internals
*/
mongoc_init();
/*
* Optionally get MongoDB URI from command line
*/
if (argc > 1) {
uri_string = argv[1];
}
/*
* Safely create a MongoDB URI object from the given string
*/
uri = mongoc_uri_new_with_error(uri_string, &error);
if (!uri) {
fprintf(stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string, error.message);
return EXIT_FAILURE;
}
/*
* Create a new client instance, here we use the uri we just built
*/
client = mongoc_client_new_from_uri(uri);
if (!client) {
return EXIT_FAILURE;
}
/*
* Register the application name so we can track it in the profile logs
* on the server. This can also be done from the URI (see other examples).
*/
mongoc_client_set_appname(client, "connect-example");
/*
* Do work. This example pings the database and prints the result as JSON
* BCON == BSON C Object Notation
*/
command = BCON_NEW("ping", BCON_INT32(1));
// we run above command on our DB, using the client. We get reply and error
// (if any)
retval = mongoc_client_command_simple(client, "admin", command, NULL, &reply,
&error);
// mongoc_client_command_simple returns false and sets error if there are
// invalid arguments or a server or network error.
if (!retval) {
fprintf(stderr, "%s\n", error.message);
return EXIT_FAILURE;
}
// if we're here, there's a JSON response
str = bson_as_json(&reply, NULL);
printf("%s\n", str);
/*
* Clean up memory
*/
bson_destroy(&reply);
bson_destroy(command);
bson_free(str);
/*
* Release our handles and clean up libmongoc
*/
mongoc_uri_destroy(uri);
mongoc_client_destroy(client);
mongoc_cleanup();
return EXIT_SUCCESS;
}
```
## Compiling and running our code
Although we can use way more sophisticated methods to compile and run our code, as this is just a C source code file and we're using just a few dependencies, I'll just compile from command line using good ol' gcc:
```bash
gcc -o hello_mongo hello_mongo.c \
  -I/usr/local/include/libbson-1.0 \
-I/usr/local/include/libmongoc-1.0 \
-lmongoc-1.0 -lbson-1.0
```
To run the code, just call the built binary:
```bash
./hello_mongo
```
In the repo that accompanies this post, you'll find a shell script that builds and runs all examples in one go.
## Connecting to the database and listing all collections
Now that we have the skeleton of a C app, we can start using our database. In this case, we'll connect to the database `sample_mflix`, and we'll list all collections there.
After connecting to the database, we list all collections with a simple `for` loop after getting all collection names with `mongoc_database_get_collection_names`.
```c
if ((collection_names =
mongoc_database_get_collection_names(database, &error))) {
   for (i = 0; collection_names[i]; i++) {
printf("%s\n", collection_names[i]);
}
}
```
The complete sample follows.
```c
// list_collections.c
#include <mongoc/mongoc.h>
int main(int argc, char const *argv[]) {
// your MongoDB URI connection string
  const char *uri_string = "mongodb+srv://<username>:<password>@<cluster-url>";
// MongoDB URI created from above string
mongoc_uri_t *uri;
// MongoDB Client, used to connect to the DB
mongoc_client_t *client;
// Error management
bson_error_t error;
mongoc_database_t *database;
mongoc_collection_t *collection;
char **collection_names;
unsigned i;
/*
* Required to initialize libmongoc's internals
*/
mongoc_init();
/*
* Safely create a MongoDB URI object from the given string
*/
uri = mongoc_uri_new_with_error(uri_string, &error);
if (!uri) {
fprintf(stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string, error.message);
return EXIT_FAILURE;
}
/*
* Create a new client instance, here we use the uri we just built
*/
client = mongoc_client_new_from_uri(uri);
if (!client) {
return EXIT_FAILURE;
}
/*
* Register the application name so we can track it in the profile logs
* on the server. This can also be done from the URI (see other examples).
*/
mongoc_client_set_appname(client, "connect-example");
/*
* Get a handle on the database "db_name" and collection "coll_name"
*/
database = mongoc_client_get_database(client, "sample_mflix");
// getting all collection names, here we're not passing in any options
if ((collection_names = mongoc_database_get_collection_names_with_opts(
database, NULL, &error))) {
for (i = 0; collection_names[i]; i++) {
printf("%s\n", collection_names[i]);
}
} else {
fprintf(stderr, "Error: %s\n", error.message);
return EXIT_FAILURE;
}
/*
* Release our handles and clean up libmongoc
*/
mongoc_uri_destroy(uri);
mongoc_client_destroy(client);
mongoc_cleanup();
return EXIT_SUCCESS;
}
```
If we compile and run it, we'll get this output:
```
$ ./list_collections
sessions
users
theaters
movies
comments
```
## Creating a JSON object in C
MongoDB is a document-based database, so creating JSON documents is crucial for any application that interacts with it. Since this is C code, we don't use JSON directly. Instead, we use BCON (BSON C Object Notation), as mentioned above. To create a new document, we call `BCON_NEW`, and to convert it into a C string, we call `bson_as_canonical_extended_json`.
```c
// bcon.c
// https://mongoc.org/libmongoc/current/tutorial.html#using-bcon
#include <mongoc/mongoc.h>
// Creating the JSON doc:
/*
{
born : ISODate("1906-12-09"),
died : ISODate("1992-01-01"),
name : {
first : "Grace",
last : "Hopper"
},
 languages : [ "MATH-MATIC", "FLOW-MATIC", "COBOL" ],
degrees: [ { degree: "BA", school: "Vassar" },
{ degree: "PhD", school: "Yale" } ]
}
*/
int main(int argc, char *argv[]) {
struct tm born = {0};
struct tm died = {0};
bson_t *document;
char *str;
born.tm_year = 6;
born.tm_mon = 11;
born.tm_mday = 9;
died.tm_year = 92;
died.tm_mon = 0;
died.tm_mday = 1;
// document = BCON_NEW("born", BCON_DATE_TIME(mktime(&born) * 1000),
// "died", BCON_DATE_TIME(mktime(&died) * 1000),
// "name", "{",
// "first", BCON_UTF8("Grace"),
// "last", BCON_UTF8("Hopper"),
// "}",
// "languages", "[",
// BCON_UTF8("MATH-MATIC"),
// BCON_UTF8("FLOW-MATIC"),
// BCON_UTF8("COBOL"),
// "]",
// "degrees", "[",
// "{", "degree", BCON_UTF8("BA"), "school",
// BCON_UTF8("Vassar"), "}",
// "{", "degree", BCON_UTF8("PhD"),"school",
// BCON_UTF8("Yale"), "}",
// "]");
document = BCON_NEW("born", BCON_DATE_TIME(mktime(&born) * 1000), "died",
BCON_DATE_TIME(mktime(&died) * 1000), "name", "{",
"first", BCON_UTF8("Grace"), "last", BCON_UTF8("Hopper"),
"}", "languages", "[", BCON_UTF8("MATH-MATIC"),
BCON_UTF8("FLOW-MATIC"), BCON_UTF8("COBOL"), "]",
"degrees", "[", "{", "degree", BCON_UTF8("BA"), "school",
BCON_UTF8("Vassar"), "}", "{", "degree", BCON_UTF8("PhD"),
"school", BCON_UTF8("Yale"), "}", "]");
/*
* Print the document as a JSON string.
*/
str = bson_as_canonical_extended_json(document, NULL);
printf("%s\n", str);
bson_free(str);
/*
* Clean up allocated bson documents.
*/
bson_destroy(document);
return 0;
}
```
## CRUD in MongoDB using the C driver
Now that we've covered the basics of connecting to MongoDB, let's have a look at how to manipulate data.
## Querying data
Probably the most used function of any database is to retrieve data fast. In most use cases, we spend way more time accessing data than inserting or updating that same data. In this case, after creating our MongoDB client connection, we call `mongoc_collection_find_with_opts`, which will find data based on a query we can pass in. Once we have results, we can iterate through the returned cursor and do something with that data:
```c
// All movies from 1984!
BSON_APPEND_INT32(query, "year", 1984);
cursor = mongoc_collection_find_with_opts(collection, query, NULL, NULL);
while (mongoc_cursor_next(cursor, &query)) {
str = bson_as_canonical_extended_json(query, NULL);
printf("%s\n", str);
bson_free(str);
}
```
The complete sample follows.
```c
// find.c
#include "URI.h"
#include <mongoc/mongoc.h>
int main(int argc, char const *argv[]) {
// your MongoDB URI connection string
const char *uri_string = MY_MONGODB_URI;
// MongoDB URI created from above string
mongoc_uri_t *uri;
// MongoDB Client, used to connect to the DB
mongoc_client_t *client;
// Error management
bson_error_t error;
mongoc_collection_t *collection;
char **collection_names;
unsigned i;
// Query object
bson_t *query;
mongoc_cursor_t *cursor;
char *str;
/*
* Required to initialize libmongoc's internals
*/
mongoc_init();
/*
* Safely create a MongoDB URI object from the given string
*/
uri = mongoc_uri_new_with_error(uri_string, &error);
if (!uri) {
fprintf(stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string, error.message);
return EXIT_FAILURE;
}
/*
* Create a new client instance, here we use the uri we just built
*/
client = mongoc_client_new_from_uri(uri);
if (!client) {
puts("Error connecting!");
return EXIT_FAILURE;
}
/*
* Register the application name so we can track it in the profile logs
* on the server. This can also be done from the URI (see other examples).
*/
mongoc_client_set_appname(client, "connect-example");
/*
* Get a handle on the database "db_name" and collection "coll_name"
*/
collection = mongoc_client_get_collection(client, "sample_mflix", "movies");
query = bson_new();
// All movies from 1984!
BSON_APPEND_INT32(query, "year", 1984);
cursor = mongoc_collection_find_with_opts(collection, query, NULL, NULL);
while (mongoc_cursor_next(cursor, &query)) {
str = bson_as_canonical_extended_json(query, NULL);
printf("%s\n", str);
bson_free(str);
}
/*
* Release our handles and clean up libmongoc
*/
bson_destroy(query);
mongoc_collection_destroy(collection);
mongoc_uri_destroy(uri);
mongoc_client_destroy(client);
mongoc_cleanup();
return EXIT_SUCCESS;
}
```
## Inserting a new document
OK, we know how to read data, but how about inserting fresh data in our MongoDB database? It's easy! We just create a BSON document to be inserted and call `mongoc_collection_insert_one`.
```c
doc = bson_new();
bson_oid_init(&oid, NULL);
BSON_APPEND_OID(doc, "_id", &oid);
BSON_APPEND_UTF8(doc, "name", "My super new picture");
if (!mongoc_collection_insert_one(collection, doc, NULL, NULL, &error)) {
fprintf(stderr, "%s\n", error.message);
}
```
The complete sample follows.
```c
// insert.c
#include "URI.h"
#include <mongoc/mongoc.h>
int main(int argc, char const *argv[]) {
// your MongoDB URI connection string
const char *uri_string = MY_MONGODB_URI;
// MongoDB URI created from above string
mongoc_uri_t *uri;
// MongoDB Client, used to connect to the DB
mongoc_client_t *client;
// Error management
bson_error_t error;
mongoc_collection_t *collection;
char **collection_names;
unsigned i;
// Object id and BSON doc
bson_oid_t oid;
bson_t *doc;
char *str;
/*
* Required to initialize libmongoc's internals
*/
mongoc_init();
/*
* Safely create a MongoDB URI object from the given string
*/
uri = mongoc_uri_new_with_error(uri_string, &error);
if (!uri) {
fprintf(stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string, error.message);
return EXIT_FAILURE;
}
/*
* Create a new client instance, here we use the uri we just built
*/
client = mongoc_client_new_from_uri(uri);
if (!client) {
return EXIT_FAILURE;
}
/*
* Register the application name so we can track it in the profile logs
* on the server. This can also be done from the URI (see other examples).
*/
mongoc_client_set_appname(client, "connect-example");
/*
* Get a handle on the database "db_name" and collection "coll_name"
*/
collection = mongoc_client_get_collection(client, "sample_mflix", "movies");
doc = bson_new();
bson_oid_init(&oid, NULL);
BSON_APPEND_OID(doc, "_id", &oid);
BSON_APPEND_UTF8(doc, "name", "My super new picture");
if (!mongoc_collection_insert_one(collection, doc, NULL, NULL, &error)) {
fprintf(stderr, "%s\n", error.message);
} else {
printf("Document inserted!");
/*
* Print the document as a JSON string.
*/
str = bson_as_canonical_extended_json(doc, NULL);
printf("%s\n", str);
bson_free(str);
}
/*
* Release our handles and clean up libmongoc
*/
mongoc_collection_destroy(collection);
mongoc_uri_destroy(uri);
mongoc_client_destroy(client);
mongoc_cleanup();
return EXIT_SUCCESS;
}
```
## Deleting a document
To delete a document, we call `mongoc_collection_delete_one`. We need to pass in a document containing the query to restrict the documents we want to find and delete.
```c
doc = bson_new();
BSON_APPEND_OID(doc, "_id", &oid);
if (!mongoc_collection_delete_one(collection, doc, NULL, NULL, &error)) {
fprintf(stderr, "Delete failed: %s\n", error.message);
}
```
The complete sample follows.
```c
// delete.c
#include "URI.h"
#include <mongoc/mongoc.h>
int main(int argc, char const *argv[]) {
// your MongoDB URI connection string
const char *uri_string = MY_MONGODB_URI;
// MongoDB URI created from above string
mongoc_uri_t *uri;
// MongoDB Client, used to connect to the DB
mongoc_client_t *client;
// Error management
bson_error_t error;
mongoc_collection_t *collection;
char **collection_names;
unsigned i;
// Object id and BSON doc
bson_oid_t oid;
bson_t *doc;
char *str;
/*
* Required to initialize libmongoc's internals
*/
mongoc_init();
/*
* Safely create a MongoDB URI object from the given string
*/
uri = mongoc_uri_new_with_error(uri_string, &error);
if (!uri) {
fprintf(stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string, error.message);
return EXIT_FAILURE;
}
/*
* Create a new client instance, here we use the uri we just built
*/
client = mongoc_client_new_from_uri(uri);
if (!client) {
return EXIT_FAILURE;
}
/*
* Register the application name so we can track it in the profile logs
* on the server. This can also be done from the URI (see other examples).
*/
mongoc_client_set_appname(client, "connect-example");
/*
* Get a handle on the database "db_name" and collection "coll_name"
*/
collection = mongoc_client_get_collection(client, "sample_mflix", "movies");
// Let's insert one document in this collection!
doc = bson_new();
bson_oid_init(&oid, NULL);
BSON_APPEND_OID(doc, "_id", &oid);
BSON_APPEND_UTF8(doc, "name", "My super new picture");
if (!mongoc_collection_insert_one(collection, doc, NULL, NULL, &error)) {
fprintf(stderr, "%s\n", error.message);
} else {
printf("Document inserted!");
/*
* Print the document as a JSON string.
*/
str = bson_as_canonical_extended_json(doc, NULL);
printf("%s\n", str);
bson_free(str);
}
bson_destroy(doc);
// Delete the inserted document!
doc = bson_new();
BSON_APPEND_OID(doc, "_id", &oid);
if (!mongoc_collection_delete_one(collection, doc, NULL, NULL, &error)) {
fprintf(stderr, "Delete failed: %s\n", error.message);
} else {
puts("Document deleted!");
}
/*
* Release our handles and clean up libmongoc
*/
mongoc_collection_destroy(collection);
mongoc_uri_destroy(uri);
mongoc_client_destroy(client);
mongoc_cleanup();
return EXIT_SUCCESS;
}
```
## Updating a document
Finally, to update a document, we need to provide the query to find the document to update and a document with the fields we want to change.
```c
query = BCON_NEW("_id", BCON_OID(&oid));
update =
BCON_NEW("$set", "{", "name", BCON_UTF8("Super new movie was boring"),
"updated", BCON_BOOL(true), "}");
if (!mongoc_collection_update_one(collection, query, update, NULL, NULL,
&error)) {
fprintf(stderr, "%s\n", error.message);
}
```
The complete sample follows.
```c
// update.c
#include "URI.h"
#include <mongoc/mongoc.h>
int main(int argc, char const *argv[]) {
// your MongoDB URI connection string
const char *uri_string = MY_MONGODB_URI;
// MongoDB URI created from above string
mongoc_uri_t *uri;
// MongoDB Client, used to connect to the DB
mongoc_client_t *client;
// Error management
bson_error_t error;
mongoc_collection_t *collection;
char **collection_names;
unsigned i;
// Object id and BSON doc
bson_oid_t oid;
bson_t *doc;
// document to update and query to find it
bson_t *update = NULL;
bson_t *query = NULL;
char *str;
/*
* Required to initialize libmongoc's internals
*/
mongoc_init();
/*
* Safely create a MongoDB URI object from the given string
*/
uri = mongoc_uri_new_with_error(uri_string, &error);
if (!uri) {
fprintf(stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string, error.message);
return EXIT_FAILURE;
}
/*
* Create a new client instance, here we use the uri we just built
*/
client = mongoc_client_new_from_uri(uri);
if (!client) {
return EXIT_FAILURE;
}
/*
* Register the application name so we can track it in the profile logs
* on the server. This can also be done from the URI (see other examples).
*/
mongoc_client_set_appname(client, "connect-example");
/*
* Get a handle on the database "db_name" and collection "coll_name"
*/
collection = mongoc_client_get_collection(client, "sample_mflix", "movies");
// we create a new BSON Document
doc = bson_new();
bson_oid_init(&oid, NULL);
BSON_APPEND_OID(doc, "_id", &oid);
BSON_APPEND_UTF8(doc, "name", "My super new movie");
// Then we insert it in the movies collection
if (!mongoc_collection_insert_one(collection, doc, NULL, NULL, &error)) {
fprintf(stderr, "%s\n", error.message);
} else {
printf("Document inserted!\n");
/*
* Print the document as a JSON string.
*/
str = bson_as_canonical_extended_json(doc, NULL);
printf("%s\n", str);
bson_free(str);
// now we search for that document to update it
query = BCON_NEW("_id", BCON_OID(&oid));
update =
BCON_NEW("$set", "{", "name", BCON_UTF8("Super new movie was boring"),
"updated", BCON_BOOL(true), "}");
if (!mongoc_collection_update_one(collection, query, update, NULL, NULL,
&error)) {
fprintf(stderr, "%s\n", error.message);
} else {
printf("Document edited!\n");
str = bson_as_canonical_extended_json(update, NULL);
printf("%s\n", str);
}
}
/*
* Release our handles and clean up libmongoc
*/
if (doc) {
bson_destroy(doc);
}
if (query) {
bson_destroy(query);
}
if (update) {
bson_destroy(update);
}
mongoc_collection_destroy(collection);
mongoc_uri_destroy(uri);
mongoc_client_destroy(client);
mongoc_cleanup();
return EXIT_SUCCESS;
}
```
## Wrapping up
With this article, we covered the installation of the MongoDB C driver, configuring VS Code as our editor and setting up other tools. Then, we created a few console applications that connect to MongoDB Atlas and perform basic CRUD operations.
Get more information about the C driver. To try this code, the easiest way is to register for a free MongoDB account. We can't wait to see what you build next!
| md | {
"tags": [
"Atlas",
"C"
],
"pageDescription": "In this article we'll install the MongoDB C driver on macOS, and use it to write some sample console applications that can interact with your MongoDB data by performing basic CRUD operations, using Visual Studio Code.",
"contentType": "Tutorial"
} | Getting Started with MongoDB and C | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/use-effectively-realm-in-xamarin-forms | created | # How to Use Realm Effectively in a Xamarin.Forms App
Taking care of persistence while developing a mobile application is fundamental nowadays. Even though mobile connection bandwidth, as well as coverage, has been steadily increasing over time, applications still are expected to work offline and in a limited connectivity environment.
This becomes even more cumbersome when working on applications that need to exchange a steady stream of data with the service in order to work effectively, such as collaborative applications.
Caching data coming from a service is difficult, but Realm can ease the burden by providing a very natural way of storing and accessing data. This in turn will make the application more responsive and allow the end user to work seamlessly regardless of the connection status.
The aim of this article is to show how to use Realm effectively, particularly in a Xamarin.Forms app. We will take a look at **SharedGroceries**, an app to share grocery lists with friends and family, backed by a REST API. With this application, we wanted to provide an example that would be simple but also somewhat complete, in order to cover different common use cases. The code for the application can be found in the repository here.
Before proceeding, please note that this is not an introductory article to Realm or Xamarin.Forms, so we expect you to have some familiarity with both. If you want to get an introduction to Realm, you can take a look at the documentation for the Realm .NET SDK. The official documentation for Xamarin.Forms and MVVM are valuable resources to learn about these topics.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now: Deploy Sample for Free!
## The architecture
In this section, we are going to discuss the difference between the architecture of an application backed by a classic SQL database and the architecture of an application that uses Realm.
### Classic architecture
In an app backed by a classic SQL database, the structure of the application will be similar to the one shown in the diagram, where the arrows represent the dependency between different components of the application. The view model requests data from a repository that can retrieve it both from a remote data source (like a web service) when online and from a local database, depending on the situation. The repository also takes care of keeping the local database up to date with all the data retrieved from the web service.
This approach presents some issues:
* *Combining data coming from both the remote data source and the local one is difficult.* For example, when opening a view in an application for the first time, it's quite common to show locally cached data while the data coming from a web service is being fetched. In this case, it's not easy to synchronize the retrieval or to merge the data coming from both sources for presentation in the view.
* *The data coming from the local source is static.* The objects that are retrieved from the database are generally POCOs (plain old CLR objects) and as such, they do not reflect the current state of the data present in the cache. For example, in order to keep the data shown to the user as fresh as possible, there could be a synchronization process in the background that is continuously retrieving data from the web service and inserting it into the database. It's quite complex to make this data available to the end user of the application, though, as with a classic SQL database we can get fresh data only with a new query, and this needs to be done manually, further increasing the need to coordinate different components of the application.
* *Pagination is hard.* Objects are fully loaded from the database upon retrieval, and this can cause performance issues when working with big datasets. In this case, pagination could be required to keep the application performant, but this is not easy to implement.
### Realm architecture
When working with Realm, instead, the structure of the application should be similar to the one in the diagram above.
In this approach, the realm is directly accessed from the view model, and not hidden behind a repository like before. When information is retrieved from the web service, it is inserted into the database, and the view model can update the UI thanks to notifications coming from the realm. In our architecture, we have decided to call *DataService* the entity responsible for the flow of the data in the application.
There are several advantages to this approach:
* *Single source of truth removes conflicts.* Because data is coming only from the realm, then there are no issues with merging and synchronizing data coming from multiple data sources on the UI. For example, when opening a view in an application for the first time, data coming from the realm is shown straight away. In the meantime, data from the web service is retrieved and inserted into the realm. This will trigger a notification in the view model that will update the UI accordingly.
* *Objects and collections are live*. This means that the data coming from the realm is always the latest available locally. There is no need to query the database again to get the latest version of the data, as with an SQL database.
* *Objects and collections are lazily loaded.* This means that there is no need to worry about pagination, even when working with huge datasets.
* *Bindings.* Realm works out of the box with data bindings in Xamarin.Forms, greatly simplifying the use of the MVVM pattern.
As you can see in the diagram, the line between the view model and the DataService is dashed, to indicate that it is optional. Because the view model shows only data coming from the realm, it does not actually need to have a dependency on the DataService, and the retrieval of data from the web service can happen independently. For example, the DataService could continuously request data from the web service to keep the data fresh, regardless of what is being shown to the user at a specific time. This continuous request approach can also be used with a SQL database solution, but that would require additional synchronization and queries, as the data coming from the database is static. Sometimes, though, data needs to be exchanged with the web service in response to specific user actions—for example with pull-to-refresh—and in this case, the view model needs to depend on the DataService.
## SharedGroceries app
In this section, we are going to introduce our example application and how to run it.
SharedGroceries is a simple collaborative app that allows you to share grocery lists with friends and family, backed by a REST API. We have decided to use REST as it is quite a common choice and allowed us to create a service easily. We are not going to focus too much on the REST API service, as it is outside of the scope of this article.
Let's take a look at the application now. The screenshots here are taken from the iOS version of the application only, for simplicity:
* (a) The first page of the application is the login page, where the user can input their username and password to login.
* (b) After login, the user is presented with the shopping lists they are currently sharing. Additionally, the user can add a new list here.
* (c) Clicking on a row opens the shopping list page, which shows the contents of that list. From here, the user can add and remove items, rename them, and check/uncheck them when they have been bought.
To run the app, you first need to run the web service with the REST API. In order to do so, open the `SharedGroceriesWebService` project, and run it. This should start the web service on `http://localhost:5000` by default. After that, you can simply run the `SharedGroceries` project that contains the code for the Xamarin.Forms application. The app is already configured to connect to the web service at the default address.
For simplicity, we do not cover the case of registering users, and they are all created already on the web service. In particular, there are three predefined users—`alice`, `bob`, and `charlie`, all with password set to `1234`—that can be used to access the app. A couple of shopping lists are also already created in the service to make it easier to test the application.
## Realm in practice
In this section, we are going to go into detail about the structure of the app and how to use Realm effectively. The structure follows the architecture that was described in the architecture section.
### Rest API
If we start from the lower part of the architecture schema, we have the `RestAPI` namespace that contains the code responsible for the communication with the web service. In particular, the `RestAPIClient` is making HTTP requests to the `SharedGroceriesWebService`. The data is exchanged in the form of DTOs (Data Transfer Objects), simple objects used for the serialization and deserialization of data over the network. In this simple app, we could avoid using DTOs, and directly use our Realm model objects, but it's always a good idea to use specific objects just for the data transfer, as this allows us to have independence between the local persistence model and the service model. With this separation, we don't necessarily need to change our local model in case the service model changes.
Here you have the example of one of the DTOs in the app:
``` csharp
public class UserInfoDTO
{
public Guid Id { get; set; }
public string Name { get; set; }
public UserInfo ToModel()
{
return new UserInfo
{
Id = Id,
Name = Name,
};
}
public static UserInfoDTO FromModel(UserInfo user)
{
return new UserInfoDTO
{
Id = user.Id,
Name = user.Name,
};
}
}
```
`UserInfoDTO` is just a container used for the serialization/deserialization of data transmitted in the API calls, and contains methods for converting to and from the local model (in this case, the `UserInfo` class).
### RealmService
`RealmService` is responsible for providing a reference to a realm:
``` csharp
public static class RealmService
{
public static Realm GetRealm() => Realm.GetInstance();
}
```
The class is quite simple at the moment, as we are using the default configuration for the realm. Having a separate class becomes more useful, though, when we have a more complicated configuration for the realm and we want to avoid code duplication.
Please note that the `GetRealm` method creates a new realm instance each time it is called. Because realm instances need to be used on the same thread where they were created, this method can be used from anywhere in our code, without the need to worry about threading issues.
It's also important to dispose of realm instances when they are not needed anymore, especially on background threads.
### DataService
The `DataService` class is responsible for managing the flow of data in the application. When needed, the class requests data from the `RestAPIClient`, and then persists it in the realm. A typical method in this class would look like this:
``` csharp
public static async Task RetrieveUsers()
{
try
{
//Retrieve data from the API
var users = await RestAPIClient.GetAllUsers();
//Persist data in Realm
using var realm = RealmService.GetRealm();
realm.Write(() =>
{
realm.Add(users.Select(u => u.ToModel()), update: true);
});
}
catch (HttpRequestException) //Offline/Service is not reachable
{
}
}
```
The `RetrieveUsers` method is first retrieving the list of users (in the form of DTOs) from the Rest API, and then inserting them into the realm, after a conversion from DTOs to model objects. Here you can see the use of the `using` declaration to dispose of the realm at the end of the try block.
### Realm models
The definition of the model for Realm is generally straightforward, as it is possible to use a simple C# class as a model with very little modifications. In the following snippet, you can see the three model classes that we are using in SharedGroceries:
``` csharp
public class UserInfo : RealmObject
{
[PrimaryKey]
public Guid Id { get; set; }
public string Name { get; set; }
}
public class GroceryItem : EmbeddedObject
{
public string Name { get; set; }
public bool Purchased { get; set; }
}
public class ShoppingList : RealmObject
{
[PrimaryKey]
public Guid Id { get; set; } = Guid.NewGuid();
public string Name { get; set; }
public ISet<UserInfo> Owners { get; }
public IList<GroceryItem> Items { get; }
}
```
The models are pretty simple, and closely resemble the DTO objects that are retrieved from the web service. One of the few caveats when writing Realm model classes is to remember that collections (lists, sets, and dictionaries) need to be declared as getter-only properties of the corresponding interface type (`IList`, `ISet`, `IDictionary`), as is done with `ShoppingList`.
Another thing to notice here is that `GroceryItem` is defined as an `EmbeddedObject`, to indicate that it cannot exist as an independent Realm object (and thus it cannot have a `PrimaryKey`), and has the same lifecycle as the `ShoppingList` that contains it. This implies that `GroceryItem`s get deleted when the parent `ShoppingList` is deleted.
### View models
We will now go through the two main view models in the app, and discuss the most important points. We are going to skip `LoginViewModel`, as it is not particularly interesting.
#### ShoppingListsCollectionViewModel
`ShoppingListsCollectionViewModel` is the view model backing `ShoppingListsCollectionPage`, the main page of the application, that shows the list of shopping lists for the current user. Let's take a look at the main elements:
``` csharp
public class ShoppingListsCollectionViewModel : BaseViewModel
{
private readonly Realm realm;
private bool loaded;
public ICommand AddListCommand { get; }
public ICommand OpenListCommand { get; }
public IEnumerable<ShoppingList> Lists { get; }
public ShoppingList SelectedList
{
get => null;
set
{
OpenListCommand.Execute(value);
OnPropertyChanged();
}
}
public ShoppingListsCollectionViewModel()
{
//1
realm = RealmService.GetRealm();
Lists = realm.All<ShoppingList>();
AddListCommand = new AsyncCommand(AddList);
OpenListCommand = new AsyncCommand<ShoppingList>(OpenList);
}
internal override async void OnAppearing()
{
base.OnAppearing();
IDisposable loadingIndicator = null;
try
{
//2
if (!loaded)
{
//Page is appearing for the first time, sync with service
//and retrieve users and shopping lists
loaded = true;
loadingIndicator = DialogService.ShowLoading();
await DataService.TrySync();
await DataService.RetrieveUsers();
await DataService.RetrieveShoppingLists();
}
else
{
DataService.FinishEditing();
}
}
catch
{
await DialogService.ShowAlert("Error", "Error while loading the page");
}
finally
{
loadingIndicator?.Dispose();
}
}
//3
private async Task AddList()
{
var newList = new ShoppingList();
newList.Owners.Add(DataService.CurrentUser);
realm.Write(() =>
{
return realm.Add(newList, true);
});
await OpenList(newList);
}
private async Task OpenList(ShoppingList list)
{
DataService.StartEditing(list.Id);
await NavigationService.NavigateTo(new ShoppingListViewModel(list));
}
}
```
In the constructor of the view model (*1*), we are initializing `realm` and also `Lists`. That is a queryable collection of `ShoppingList` elements, representing all the shopping lists of the user. `Lists` is defined as a public property with a getter, and this allows us to bind it to the UI, as we can see in `ShoppingListsCollectionPage.xaml`:
``` xml
<!-- A simplified sketch of the bindings described below; see the sample repository for the full page. -->
<!-- A: the ListView's ItemsSource is bound to the view model's Lists collection -->
<ListView ItemsSource="{Binding Lists}"
          SelectedItem="{Binding SelectedList}">
    <ListView.ItemTemplate>
        <DataTemplate>
            <!-- B: each row is a TextCell bound to the Name of a ShoppingList -->
            <TextCell Text="{Binding Name}" />
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>
```
The content of the page is a `ListView` whose `ItemsSource` is bound to `Lists` (*A*). This means that the rows of the `ListView` are actually bound to the elements of `Lists` (that is, a collection of `ShoppingList`). A little bit down, we can see that each of the rows of the `ListView` is a `TextCell` whose text is bound to the `Name` property of `ShoppingList` (*B*). Together, this means that this page will show a row for each of the shopping lists, with the name of the list in the row.
An important thing to know is that, behind the scenes, Realm collections (like `Lists`, in this case) implement `INotifyCollectionChanged`, and that Realm objects implement `INotifyPropertyChanged`. This means that the UI will be updated automatically whenever there is a change in the collection (for example, by adding or removing elements), as well as whenever there is a change in an object (if a property changes). This greatly simplifies using the MVVM pattern, as implementing those interfaces manually is a tedious and error-prone process.
Coming back to `ShoppingListsCollectionViewModel`, in `OnAppearing`, we can see how the Realm collection is actually populated. If the page has not been loaded before (*2*), we call the methods `DataService.RetrieveUsers` and `DataService.RetrieveShoppingLists`, that retrieve the list of users and shopping lists from the service and insert them into the realm. Due to the fact that Realm collections are live, `Lists` will notify the UI that its contents have changed, and the list on the screen will get populated automatically.
Note that there are also some more interesting elements here that are related to the synchronization of local data with the web service, but we will discuss them later.
Finally, we have the `AddList` and `OpenList` methods (*3*) that are invoked, respectively, when the *Add* button is clicked or when a list is clicked. The `OpenList` method just passes the clicked `list` to the `ShoppingListViewModel`, while `AddList` first creates a new empty list, adds the current user in the list of owners, adds it to the realm, and then opens the list.
#### ShoppingListViewModel
`ShoppingListViewModel` is the view model backing `ShoppingListPage`, the page that shows the content of a certain list and allows us to modify it:
``` csharp
public class ShoppingListViewModel : BaseViewModel
{
private readonly Realm realm;
public ShoppingList ShoppingList { get; }
public IEnumerable<GroceryItem> CheckedItems { get; }
public IEnumerable<GroceryItem> UncheckedItems { get; }
public ICommand DeleteItemCommand { get; }
public ICommand AddItemCommand { get; }
public ICommand DeleteCommand { get; }
public ShoppingListViewModel(ShoppingList list)
{
realm = RealmService.GetRealm();
ShoppingList = list;
//1
CheckedItems = ShoppingList.Items.AsRealmQueryable().Where(i => i.Purchased);
UncheckedItems = ShoppingList.Items.AsRealmQueryable().Where(i => !i.Purchased);
DeleteItemCommand = new Command<GroceryItem>(DeleteItem);
AddItemCommand = new Command(AddItem);
DeleteCommand = new AsyncCommand(Delete);
}
//2
private void AddItem()
{
realm.Write(() =>
{
ShoppingList.Items.Add(new GroceryItem());
});
}
private void DeleteItem(GroceryItem item)
{
realm.Write(() =>
{
ShoppingList.Items.Remove(item);
});
}
private async Task Delete()
{
var confirmDelete = await DialogService.ShowConfirm("Deletion",
"Are you sure you want to delete the shopping list?");
if (!confirmDelete)
{
return;
}
var listId = ShoppingList.Id;
realm.Write(() =>
{
realm.Remove(ShoppingList);
});
await NavigationService.GoBack();
}
}
```
As we will see in a second, the page binds to two different collections, `CheckedItems` and `UncheckedItems`, that represent, respectively, the list of items that have been checked (purchased) and those that haven't been. In order to obtain those, `AsRealmQueryable` is called on `ShoppingList.Items` to convert the `IList<GroceryItem>` into a Realm-backed query that can be filtered with LINQ.
The xaml code for the page can be found in `ShoppingListPage.xaml`. Here is the main content:
``` xml
<!-- A simplified sketch of the page structure described below; see the sample repository for the full page.
     It assumes the ContentPage root (not shown) is named "Page" so the delete Button can reach the view model. -->
<StackLayout>
    <!-- B: the list's name, editable by the user -->
    <Editor Text="{Binding ShoppingList.Name}" />

    <!-- C: the unchecked (still to buy) items -->
    <StackLayout BindableLayout.ItemsSource="{Binding UncheckedItems}">
        <BindableLayout.ItemTemplate>
            <DataTemplate>
                <StackLayout Orientation="Horizontal">
                    <!-- H: check/uncheck an item -->
                    <CheckBox IsChecked="{Binding Purchased}" />
                    <!-- I: rename an item -->
                    <Entry Text="{Binding Name}" />
                    <!-- J: delete an item, passing the current GroceryItem to the view model -->
                    <Button Text="X"
                            Command="{Binding BindingContext.DeleteItemCommand, Source={x:Reference Page}}"
                            CommandParameter="{Binding .}" />
                </StackLayout>
            </DataTemplate>
        </BindableLayout.ItemTemplate>
    </StackLayout>

    <!-- D: add a new item -->
    <Button Text="Add item" Command="{Binding AddItemCommand}" />

    <!-- E: separator and count of checked items -->
    <BoxView HeightRequest="1" />
    <Label Text="{Binding CheckedItems.Count}" />

    <!-- F: the checked (already purchased) items -->
    <StackLayout BindableLayout.ItemsSource="{Binding CheckedItems}" />
</StackLayout>
```
This page is composed of an outer `StackLayout` (A) that contains:
* (B) An `Editor` whose `Text` is bound to `ShoppingList.Name`. This allows the user to read and eventually modify the name of the list.
* (C) A bindable `StackLayout` that is bound to `UncheckedItems`. This is the list of items that need to be purchased. Each of the rows of the `StackLayout` is bound to an element of `UncheckedItems`, and thus to a `GroceryItem`.
* (D) A `Button` that allows us to add new elements to the list.
* (E) A separator (the `BoxView`) and a `Label` that describe how many elements of the list have been ticked, thanks to the binding to `CheckedItems.Count`.
* (F) A bindable `StackLayout` that is bound to `CheckedItems`. This is the list of items that have already been purchased. Each of the rows of the `StackLayout` is bound to an element of `CheckedItems`, and thus to a `GroceryItem`.
If we focus our attention on the `DataTemplate` of the first bindable `StackLayout`, we can see that each row is composed of three elements:
* (H) A `Checkbox` that is bound to `Purchased` of `GroceryItem`. This allows us to check and uncheck items.
* (I) An `Entry` that is bound to `Name` of `GroceryItem`. This allows us to change the name of the items.
* (J) A `Button` that, when clicked, executes the `DeleteItemCommand` command on the view model, with the `GroceryItem` as its argument. This allows us to delete an item.
Please note that for simplicity, we have decided to use a bindable `StackLayout` to display the items of the shopping list. In a production application, it could be necessary to use a view that supports virtualization, such as a `ListView` or `CollectionView`, depending on the expected amount of elements in the collection.
An interesting thing to notice is that all the bindings are actually two-way, so they go both from the view model to the page and from the page to the view model. This, for example, allows the user to modify the name of a shopping list, as well as check and uncheck items. The view elements are bound directly to Realm objects and collections (`ShoppingList`, `UncheckedItems`, and `CheckedItems`), and so all these changes are automatically persisted in the realm.
To make a more complete example of what is happening, let us focus on checking/unchecking items. When the user checks an item, the property `Purchased` of a `GroceryItem` is set to true, thanks to the bindings. This means that this item is no longer part of `UncheckedItems` (defined as the collection of `GroceryItem` with `Purchased` set to false in the query (*1*)), and thus it will disappear from the top list. Now the item will be part of `CheckedItems` (defined as the collection of `GroceryItem` with `Purchased` set to true in the query (*1*)), and as such it will appear in the bottom list. Given that the number of elements in `CheckedItems` has changed, the text in the `Label` (*E*) will also be updated.
Coming back to the view model, we then have the `AddItem`, `DeleteItem`, and `Delete` methods (*2*) that are invoked, respectively, when an item is added, when an item is removed, and when the whole list needs to be removed. The methods are pretty straightforward, and at their core just execute a write transaction modifying or deleting `ShoppingList`.
## Editing and synchronization
In this section, we are going to discuss how shopping list editing is done in the app, and how to synchronize it back to the service.
In a mobile application, there are generally two different ways of approaching *editing*:
* *Save button*. The user modifies what they need in the application, and then presses a save button to persist their changes when satisfied.
* *Continuous save*. The changes by the user are continually saved by the application, so there is no need for an explicit save button.
Generally, the second choice is more common in modern applications, and for this reason, it is also the approach that we decided to use in our example.
The main editing in `SharedGroceries` happens in the `ShoppingListPage`, where the user can modify or delete shopping lists. As we discussed before, all the changes that are made by the user are automatically persisted in the realm thanks to the two-way bindings, and so the next step is to synchronize those changes back to the web service. Even though the changes are saved as they happen, we decided to synchronize them to the service only after the user has finished modifying a certain list and has navigated away from the `ShoppingListPage`. This allows us to send the whole updated list to the service, instead of a series of individual updates. This is a choice that we made to keep the application simple, but obviously, the requirements could be different in another case.
In order to implement the synchronization mechanism we have discussed, we needed to keep track of which shopping list was being edited at a certain time and which shopping lists have already been edited (and so can be sent to the web service). This is implemented in the following methods from the `DataService` class:
``` csharp
public static void StartEditing(Guid listId)
{
PreferencesManager.SetEditingListId(listId);
}
public static void FinishEditing()
{
var editingListId = PreferencesManager.GetEditingListId();
if (editingListId == null)
{
return;
}
//1
PreferencesManager.RemoveEditingListId();
//2
PreferencesManager.AddReadyForSyncListId(editingListId.Value);
//3
Task.Run(TrySync);
}
public static async Task TrySync()
{
//4
var readyForSyncListsId = PreferencesManager.GetReadyForSyncListsId();
//5
var editingListId = PreferencesManager.GetEditingListId();
foreach (var readyForSyncListId in readyForSyncListsId)
{
//6
if (readyForSyncListId == editingListId) //The list is still being edited
{
continue;
}
//7
var updateSuccessful = await UpdateShoppingList(readyForSyncListId);
if (updateSuccessful)
{
//8
PreferencesManager.RemoveReadyForSyncListId(readyForSyncListId);
}
}
}
```
The method `StartEditing` is called when opening a list in `ShoppingListsCollectionViewModel`:
``` csharp
private async Task OpenList(ShoppingList list)
{
DataService.StartEditing(list.Id);
await NavigationService.NavigateTo(new ShoppingListViewModel(list));
}
```
This method persists to disk the `Id` of the list that is currently being edited.
The method `FinishEditing` is called in `OnAppearing` in `ShoppingListsCollectionViewModel`:
``` csharp
internal override async void OnAppearing()
{
base.OnAppearing();
if (!loaded)
{
....
await DataService.TrySync();
....
}
else
{
DataService.FinishEditing();
}
}
```
This method is called when `ShoppingListsCollectionPage` appears on screen, which means the user has possibly come back from the `ShoppingListPage` after finishing editing. The method removes the identifier of the shopping list that is currently being edited (if it exists) (*1*), and adds it to the collection of identifiers for lists that are ready to be synced (*2*). Finally, it calls the method `TrySync` (*3*) on another thread.
Finally, the method `TrySync` is called both in `DataService.FinishEditing` and in `ShoppingListsCollectionViewModel.OnAppearing`, as we have seen before. This method takes care of synchronizing all the local changes back to the web service:
* It first retrieves the ids of the lists that are ready to be synced (*4*), and then the id of the (eventual) list being edited at the moment (*5*).
* Then, for each of the identifiers of the lists ready to be synced (`readyForSyncListsId`), if the list is being edited right now (*6*), it just skips this iteration of the loop. Otherwise, it updates the shopping list on the service (*7*).
* Finally, if the update was successful, it removes the identifier from the collection of lists that have been edited (*8*).
This method is called also in `OnAppearing` of `ShoppingListsCollectionViewModel` if this is the first time the corresponding page is loaded. We do so as we need to be sure to synchronize data back to the service when the application starts, in case there have been connection issues previously.
Overall, this is probably a very simplified approach to synchronization, as we did not consider several problems that need to be addressed in a production application:
* What happens if the service is not reachable? What is our retry policy?
* How do we resolve conflicts on the service when data is being modified by multiple users?
* How do we respect consistency of the data? How do we make sure that the changes coming from the web service are not overriding the local changes?
Those are only some of the possible issues that can arise when working with synchronization, especially in collaborative applications like ours.
## Conclusion
In this article, we have shown how Realm can be used effectively in a Xamarin.Forms app, thanks to notifications, bindings, and live objects.
The use of Realm as the source of truth for the application greatly simplified the architecture of SharedGroceries, and the automatic bindings, together with notifications, streamlined the implementation of the MVVM pattern.
Nevertheless, synchronization in a collaborative app such as SharedGroceries is still hard. In our example, we have covered only part of the possible synchronization issues that can arise, but you can already see the amount of effort necessary to ensure that everything stays in sync between the mobile application and the web service.
In a follow-up article, we are going to see how we can use Realm Sync to greatly simplify the architecture of the application and resolve our synchronization issues. | md | {
"tags": [
"C#",
"Realm",
"Xamarin"
],
"pageDescription": "This article shows how to effectively use Realm in a Xamarin.Forms app using recommended patterns. ",
"contentType": "Article"
} | How to Use Realm Effectively in a Xamarin.Forms App | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/map-terms-concepts-sql-mongodb | created | # Mapping Terms and Concepts from SQL to MongoDB
Perhaps, like me, you grew up on SQL databases. You can skillfully
normalize a database, and, after years of working with tables, you think
in rows and columns as well.
But now you've decided to dip your toe into the wonderful world of NoSQL
databases, and you're exploring MongoDB. Perhaps you're wondering what
you need to do differently. Can you just translate your rows and columns
into fields and values and call it a day? Do you really need to change
the way you think about storing your data?
We'll answer those questions and more in this three-part article series.
Below is a summary of what we'll cover today:
- Meet Ron
- Relational Database and Non-Relational Databases
- The Document Model
- Example Documents
- Mapping Terms and Concepts from SQL to MongoDB
- Wrap Up
>
>
>This article is based on a presentation I gave at MongoDB World and
>MongoDB.local Houston entitled "From SQL to NoSQL: Changing Your
>Mindset."
>
>If you prefer videos over articles, check out the
>recording. Slides are available
>here.
>
>
## Meet Ron
I'm a huge fan of the best tv show ever created: Parks and Recreation.
Yes, I wrote that previous sentence as if it were a fact, because it
actually is.
This is Ron. Ron likes strong women, bacon, and staying off the grid.
In season 6, Ron discovers Yelp. Ron thinks Yelp
is amazing, because he loves the idea of reviewing places he's been.
However, Yelp is way too "on the grid" for Ron. He pulls out his beloved
typewriter and starts typing reviews that he intends to send via snail
mail.
Ron writes some amazing reviews. Below is one of my favorites.
Unfortunately, I see three big problems with his plan:
1. Snail mail is way slower than posting the review to Yelp where it
will be instantly available for anyone to read.
2. The business he is reviewing may never open the letter he sends as
they may just assume it's junk mail.
3. No one else will benefit from his review. (These are exactly the
type of reviews I like to find on Amazon!)
### Why am I talking about Ron?
Ok, so why am I talking about Ron in the middle of this article about
moving from SQL to MongoDB?
Ron saw the value of Yelp and was inspired by the new technology.
However, he brought his old-school ways with him and did not realize the
full value of the technology.
This is similar to what we commonly see as people move from a SQL
database to a NoSQL database such as MongoDB. They love the idea of
MongoDB, and they are inspired by the power of the flexible document
data model. However, they frequently bring with them their SQL mindsets
and don't realize the full value of MongoDB. In fact, when people don't
change the way they think about modeling their data, they struggle and
sometimes fail.
Don't be like Ron. (At least in this case, because, in most cases, Ron
is amazing.) Don't be stuck in your SQL ways. Change your mindset and
realize the full value of MongoDB.
Before we jump into how to change your mindset, let's begin by answering
some common questions about non-relational databases and discussing the
basics of how to store data in MongoDB.
## Relational Database and Non-Relational Databases
When I talk with developers, they often ask me questions like, "What use
cases are good for MongoDB?" Developers often have this feeling that
non-relational
databases
(or NoSQL databases) like MongoDB are for specific, niche use cases.
MongoDB is a general-purpose database that can be used in a variety of
use cases across nearly every industry. For more details, see MongoDB
Use Cases, MongoDB
Industries, and the MongoDB Use
Case Guidance
Whitepaper
that includes a summary of when you should evaluate other database
options.
Another common question is, "If my data is relational, why would I use a
non-relational
database?"
MongoDB is considered a non-relational database. However, that doesn't
mean MongoDB doesn't store relationship data well. (I know I just used a
double-negative. Stick with me.) MongoDB stores relationship data in a
different way. In fact, many consider the way MongoDB stores
relationship data to be more intuitive and more reflective of the
real-world relationships that are being modeled.
Let's take a look at how MongoDB stores data.
## The Document Model
Instead of tables, MongoDB stores data in documents. No, Clippy, I'm not
talking about Microsoft Word Documents.
I'm talking about BSON
documents. BSON is a
binary representation of JSON (JavaScript Object Notation)
documents.
Documents will likely feel comfortable to you if you've used any of the
C-family of programming languages such as C, C#, Go, Java, JavaScript,
PHP, or Python.
Documents typically store information about one object as well as any
information related to that object. Related documents are grouped
together in collections. Related collections are grouped together and
stored in a database.
Let's discuss some of the basics of a document. Every document begins
and ends with curly braces.
``` json
{
}
```
Inside of those curly braces, you'll find an unordered set of
field/value pairs that are separated by commas.
``` json
{
field: value,
field: value,
field: value
}
```
The fields are strings that describe the pieces of data being stored.
The values can be any of the BSON data types.
BSON has a variety of data
types including
Double, String, Object, Array, Binary Data, ObjectId, Boolean, Date,
Null, Regular Expression, JavaScript, JavaScript (with scope), 32-bit
Integer, Timestamp, 64-bit Integer, Decimal128, Min Key, and Max Key.
With all of these types available for you to use, you have the power to
model your data as it exists in the real world.
Every document is required to have a field named
\_id. The
value of `_id` must be unique for each document in a collection, is
immutable, and can be of any type other than an array.
## Example Documents
Ok, that's enough definitions. Let's take a look at a real example, and
compare and contrast how we would model the data in SQL vs MongoDB.
### Storing Leslie's Information
Let's say we need to store information about a user named Leslie. We'll
store her contact information including her first name, last name, cell
phone number, and city. We'll also store some extra information about
her including her location, hobbies, and job history.
#### Storing Contact Information
Let's begin with Leslie's contact information. When using SQL, we'll
create a table named `Users`. We can create columns for each piece of
contact information we need to store: first name, last name, cell phone
number, and city. To ensure we have a unique way to identify each row,
we'll include an ID column.
**Users**
| ID | first_name | last_name | cell | city |
|-----|------------|-----------|------------|--------|
| 1 | Leslie | Yepp | 8125552344 | Pawnee |
Now let's store that same information in MongoDB. We can create a new
document for Leslie where we'll add field/value pairs for each piece of
contact information we need to store. We'll use `_id` to uniquely
identify each document. We'll store this document in a collection named
`Users`.
Users
``` json
{
"_id": 1,
"first_name": "Leslie",
"last_name": "Yepp",
"cell": "8125552344",
"city": "Pawnee"
}
```
#### Storing Latitude and Longitude
Now that we've stored Leslie's contact information, let's store the
coordinates of her current location.
When using SQL, we'll need to split the latitude and longitude between
two columns.
**Users**
| ID | first_name | last_name | cell | city | latitude | longitude |
|-----|------------|-----------|------------|--------|-----------|------------|
| 1 | Leslie | Yepp | 8125552344 | Pawnee | 39.170344 | -86.536632 |
MongoDB has an array data type, so we can store the latitude and
longitude together in a single field.
Users
``` json
{
"_id": 1,
"first_name": "Leslie",
"last_name": "Yepp",
"cell": "8125552344",
"city": "Pawnee",
"location": -86.536632, 39.170344 ]
}
```
Bonus Tip: MongoDB has a few different built-in ways to visualize
location data including the schema analyzer in MongoDB
Compass
and the Geospatial Charts in MongoDB
Charts.
I generated the map below with just a few clicks in MongoDB Charts.
#### Storing Lists of Information
We're successfully storing Leslie's contact information and current
location. Now let's store her hobbies.
When using SQL, we could choose to add more columns to the Users table.
However, since a single user could have many hobbies (meaning we need to
represent a one-to-many relationship), we're more likely to create a
separate table just for hobbies. Each row in the table will contain
information about one hobby for one user. When we need to retrieve
Leslie's hobbies, we'll join the `Users` table and our new `Hobbies`
table.
**Hobbies**
| ID | user_id | hobby |
|-----|---------|----------------|
| 10 | 1 | scrapbooking |
| 11 | 1 | eating waffles |
| 12 | 1 | working |
Since MongoDB supports arrays, we can simply add a new field named
"hobbies" to our existing document. The array can contain as many or as
few hobbies as we need (assuming we don't exceed the 16 megabyte
document size
limit).
When we need to retrieve Leslie's hobbies, we don't need to do an
expensive join to bring the data together; we can simply retrieve her
document in the `Users` collection.
Users
``` json
{
"_id": 1,
"first_name": "Leslie",
"last_name": "Yepp",
"cell": "8125552344",
"city": "Pawnee",
"location": -86.536632, 39.170344 ],
"hobbies": ["scrapbooking", "eating waffles", "working"]
}
```
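If you want to see what that single retrieval looks like in code, here's a minimal sketch using Python and PyMongo. The connection string and database name below are placeholders I made up for illustration, so swap in your own.

``` python
from pymongo import MongoClient

# Placeholder connection string and database name; use your own cluster here.
client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
db = client["my_database"]

# One round trip returns Leslie's contact info, location, and hobbies together.
# No join across Users and Hobbies tables is required.
leslie = db.Users.find_one({"_id": 1})
print(leslie["hobbies"])  # ['scrapbooking', 'eating waffles', 'working']
```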
#### Storing Groups of Related Information
Let's say we also need to store Leslie's job history.
Just as we did with hobbies, we're likely to create a separate table
just for job history information. Each row in the table will contain
information about one job for one user.
**JobHistory**
| ID | user_id | job_title | year_started |
|-----|---------|----------------------------------------------------|--------------|
| 20 | 1 | "Deputy Director" | 2004 |
| 21 | 1 | "City Councillor" | 2012 |
| 22 | 1 | "Director, National Parks Service, Midwest Branch" | 2014 |
So far in this article, we've used arrays in MongoDB to store
geolocation data and a list of Strings. Arrays can contain values of any
type, including objects. Let's create a document for each job Leslie has
held and store those documents in an array.
Users
``` json
{
"_id": 1,
"first_name": "Leslie",
"last_name": "Yepp",
"cell": "8125552344",
"city": "Pawnee",
"location": [ -86.536632, 39.170344 ],
"hobbies": ["scrapbooking", "eating waffles", "working"],
"jobHistory": [
{
"title": "Deputy Director",
"yearStarted": 2004
},
{
"title": "City Councillor",
"yearStarted": 2012
},
{
"title": "Director, National Parks Service, Midwest Branch",
"yearStarted": 2014
}
]
}
```
### Storing Ron's Information
Now that we've decided how we'll store information about our users in
both tables and documents, let's store information about Ron. Ron will
have almost all of the same information as Leslie. However, Ron does his
best to stay off the grid, so he will not be storing his location in the
system.
#### Skipping Location Data in SQL
Let's begin by examining how we would store Ron's information in the
same tables that we used for Leslie's. When using SQL, we are required
to input a value for every cell in the table. We will represent Ron's
lack of location data with `NULL`. The problem with using `NULL` is that
it's unclear whether the data does not exist or if the data is unknown,
so many people discourage the use of `NULL`.
**Users**
| ID | first_name | last_name | cell | city | latitude | longitude |
|-----|------------|--------------|------------|--------|-----------|------------|
| 1 | Leslie | Yepp | 8125552344 | Pawnee | 39.170344 | -86.536632 |
| 2 | Ron | Swandaughter | 8125559347 | Pawnee | NULL | NULL |
**Hobbies**
| ID | user_id | hobby |
|-----|---------|----------------|
| 10 | 1 | scrapbooking |
| 11 | 1 | eating waffles |
| 12 | 1 | working |
| 13 | 2 | woodworking |
| 14 | 2 | fishing |
**JobHistory**
| ID | user_id | job_title | year_started |
|-----|---------|----------------------------------------------------|--------------|
| 20 | 1 | "Deputy Director" | 2004 |
| 21 | 1 | "City Councillor" | 2012 |
| 22 | 1 | "Director, National Parks Service, Midwest Branch" | 2014 |
| 23 | 2 | "Director" | 2002 |
| 24 | 2 | "CEO, Kinda Good Building Company" | 2014 |
| 25 | 2 | "Superintendent, Pawnee National Park" | 2018 |
#### Skipping Location Data in MongoDB
In MongoDB, we have the option of representing Ron's lack of location
data in two ways: we can omit the `location` field from the document or
we can set `location` to `null`. Best practices suggest that we omit the
`location` field to save space. You can choose if you want omitted
fields and fields set to `null` to represent different things in your
applications.
Users
``` json
{
"_id": 2,
"first_name": "Ron",
"last_name": "Swandaughter",
"cell": "8125559347",
"city": "Pawnee",
"hobbies": ["woodworking", "fishing"],
"jobHistory": [
{
"title": "Director",
"yearStarted": 2002
},
{
"title": "CEO, Kinda Good Building Company",
"yearStarted": 2014
},
{
"title": "Superintendent, Pawnee National Park",
"yearStarted": 2018
}
]
}
```
### Storing Lauren's Information
Let's say we are feeling pretty good about our data models and decide to
launch our apps using them.
Then we discover we need to store information about a new user: Lauren
Burhug. She's a fourth grade student who Ron teaches about government.
We need to store a lot of the same information about Lauren as we did
with Leslie and Ron: her first name, last name, city, and hobbies.
However, Lauren doesn't have a cell phone, location data, or job
history. We also discover that we need to store a new piece of
information: her school.
#### Storing New Information in SQL
Let's begin by storing Lauren's information in the SQL tables as they
already exist.
**Users**
| ID | first_name | last_name | cell | city | latitude | longitude |
|-----|------------|--------------|------------|--------|-----------|------------|
| 1 | Leslie | Yepp | 8125552344 | Pawnee | 39.170344 | -86.536632 |
| 2 | Ron | Swandaughter | 8125559347 | Pawnee | NULL | NULL |
| 3 | Lauren | Burhug | NULL | Pawnee | NULL | NULL |
**Hobbies**
| ID | user_id | hobby |
|-----|---------|----------------|
| 10 | 1 | scrapbooking |
| 11 | 1 | eating waffles |
| 12 | 1 | working |
| 13 | 2 | woodworking |
| 14 | 2 | fishing |
| 15 | 3 | soccer |
We have two options for storing information about Lauren's school. We
can choose to add a column to the existing Users table, or we can create
a new table. Let's say we choose to add a column named "school" to the
Users table. Depending on our access rights to the database, we may need
to talk to the DBA and convince them to add the field. Most likely, the
database will need to be taken down, the "school" column will need to be
added, NULL values will be stored in every row in the Users table where
a user does not have a school, and the database will need to be brought
back up.
#### Storing New Information in MongoDB
Let's examine how we can store Lauren's information in MongoDB.
Users
``` json
{
"_id": 3,
"first_name": "Lauren",
"last_name": "Burhug",
"city": "Pawnee",
"hobbies": ["soccer"],
"school": "Pawnee Elementary"
}
```
As you can see above, we've added a new field named "school" to Lauren's
document. We do not need to make any modifications to Leslie's document
or Ron's document when we add the new "school" field to Lauren's
document. MongoDB has a flexible schema, so every document in a
collection does not need to have the same fields.
For those of you with years of experience using SQL databases, you might
be starting to panic at the idea of a flexible schema. (I know I started
to panic a little when I was introduced to the idea.)
Don't panic! This flexibility can be hugely valuable as your
application's requirements evolve and change.
MongoDB provides schema
validation so
you can lock down your schema as much or as little as you'd like when
you're ready.
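As a hedged example of what that looks like (this snippet is mine, not part of the original data model), here's how you could create the `Users` collection with a JSON Schema validator using PyMongo, requiring only a few core fields and leaving everything else flexible:

``` python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
db = client["my_database"]

# Require a few core fields; cell, location, school, jobHistory, etc. stay optional.
db.create_collection(
    "Users",
    validator={
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["first_name", "last_name", "city"],
            "properties": {
                "first_name": {"bsonType": "string"},
                "last_name": {"bsonType": "string"},
                "city": {"bsonType": "string"},
            },
        }
    },
)
```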
## Mapping Terms and Concepts from SQL to MongoDB
Now that we've compared how you model data in SQL and MongoDB, let's be a bit more explicit with the terminology. Let's map terms and concepts from SQL to MongoDB.
**Row ⇒ Document**
A row maps roughly to a document.
Depending on how you've normalized your data, rows across several tables could map to a single document. In our examples above, we saw that rows for Leslie in the `Users`, `Hobbies`, and `JobHistory` tables mapped to a single document.
**Column ⇒ Field**
A column maps roughly to a field. For example, when we modeled Leslie's data, we had a `first_name` column in the `Users` table and a `first_name` field in a User document.
**Table ⇒ Collection**
A table maps roughly to a collection. Recall that a collection is a group of documents. Continuing with our example above, our ``Users`` table maps to our ``Users`` collection.
**Database ⇒ Database**
The term ``database`` is used fairly similarly in both SQL and MongoDB.
Groups of tables are stored in SQL databases just as groups of
collections are stored in MongoDB databases.
**Index ⇒ Index**
Indexes provide fairly similar functionality in both SQL and MongoDB.
Indexes are data structures that optimize queries. You can think of them
like an index that you'd find in the back of a book; indexes tell the
database where to look for specific pieces of information. Without an
index, all information in a table or collection must be searched.
New MongoDB users often forget how much indexes can impact performance.
If you have a query that is taking a long time to run, be sure you have
an index to support it. For example, if we know we will be commonly
searching for users by first or last name, we should add a text index on
the first and last name fields.
Remember: indexes slow down write performance but speed up read
performance. For more information on indexes including the types of
indexes that MongoDB supports, see the MongoDB
Manual.
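For example, here's a small sketch (using PyMongo; the connection details are placeholders) of how we could create that text index on the first and last name fields and then use it:

``` python
from pymongo import MongoClient, TEXT

client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
collection = client["my_database"]["Users"]

# A compound text index so searches can match either name field.
collection.create_index([("first_name", TEXT), ("last_name", TEXT)])

# $text queries will now use the index instead of scanning the whole collection.
for doc in collection.find({"$text": {"$search": "Leslie"}}):
    print(doc["first_name"], doc["last_name"])
```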
**View ⇒ View**
Views are fairly similar in both SQL and MongoDB. In MongoDB, a view is
defined by an aggregation pipeline. The results of the view are not
stored—they are generated every time the view is queried.
To learn more about views, see the MongoDB
Manual.
MongoDB added support for On-Demand Materialized Views in version 4.2.
To learn more, see the MongoDB
Manual.
**Join ⇒ Embedding**
When you use SQL databases, joins are fairly common. You normalize your
data to prevent data duplication, and the result is that you commonly
need to join information from multiple tables in order to perform a
single operation in your application.
In MongoDB, we encourage you to model your data differently. Our rule of
thumb is *Data that is accessed together should be stored together*. If
you'll be frequently creating, reading, updating, or deleting a chunk of
data together, you should probably be storing it together in a document
rather than breaking it apart across several documents.
You can use embedding to model data that you may have broken out into separate tables when using SQL. When we modeled Leslie's data for MongoDB earlier, we saw that we embedded her job history in her User document instead of creating a separate ``JobHistory`` document.
For more information, see the MongoDB Manual's pages on modeling one-to-one relationships with embedding and modeling one-to-many relationships with embedding.
**Join ⇒ Database References**
As we discussed in the previous section, embedding is a common solution
for modeling data in MongoDB that you may have split across one or more
tables in a SQL database.
However, sometimes embedding does not make sense. Let's say we wanted to
store information about our Users' employers like their names,
addresses, and phone numbers. The number of Users that could be
associated with an employer is unbounded. If we were to embed
information about an employer in a ``User`` document, the employer data
could be replicated hundreds or perhaps thousands of times. Instead, we
can create a new ``Employers`` collection and create a database
reference between ``User`` documents and ``Employer`` documents.
For more information on modeling one-to-many relationships with
database references, see the MongoDB
Manual.
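Here's a minimal sketch of that reference pattern with PyMongo. The `Employers` collection, its fields, and the `employer_id` field on the user are all assumptions I'm making for illustration; they aren't part of the examples above.

``` python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
db = client["my_database"]

# Store the employer once, in its own collection...
employer_id = db.Employers.insert_one(
    {
        "name": "Pawnee Parks Department",
        "address": "100 State St, Pawnee, IN",
        "phone": "8125550100",
    }
).inserted_id

# ...and reference it from each User document instead of embedding a copy.
db.Users.update_one({"_id": 1}, {"$set": {"employer_id": employer_id}})
```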
**Left Outer Join ⇒ $lookup (Aggregation Pipeline)**
When you need to pull all of the information from one table and join it
with any matching information in a second table, you can use a left
outer join in SQL.
MongoDB has a stage similar to a left outer join that you can use with
the aggregation framework.
For those not familiar with the aggregation framework, it allows you to
analyze your data in real-time. Using the framework, you can create an
aggregation pipeline that consists of one or more stages. Each stage
transforms the documents and passes the output to the next stage.
$lookup is an aggregation framework stage that allows you to perform a
left outer join to an unsharded collection in the same database.
For more information, see the MongoDB Manual's pages on the aggregation
framework and $lookup.
MongoDB University has a fantastic free course on the aggregation
pipeline that will walk you in detail through using ``$lookup``: M121:
The MongoDB Aggregation Framework.
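Continuing the hypothetical `Employers` reference from the previous section (again, the field names are my own assumptions), a `$lookup` stage run from Python might look like this sketch:

``` python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
db = client["my_database"]

# Left outer join: every User is returned, and any matching employer
# documents are collected into an "employer" array field.
pipeline = [
    {
        "$lookup": {
            "from": "Employers",
            "localField": "employer_id",
            "foreignField": "_id",
            "as": "employer",
        }
    }
]
for doc in db.Users.aggregate(pipeline):
    print(doc["first_name"], doc.get("employer"))
```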
**Recursive Common Table Expressions ⇒ $graphLookup (Aggregation Pipeline)**
When you need to query hierarchical data like a company's organization
chart in SQL, you can use recursive common table expressions.
MongoDB provides an aggregation framework stage that is similar to
recursive common table expressions: ``$graphLookup``. ``$graphLookup``
performs a recursive search on a collection.
For more information, see the MongoDB Manual's page on $graphLookup and MongoDB University's free course on the aggregation
framework.
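For illustration only, here's a sketch of `$graphLookup` walking a hypothetical `Employees` collection in which each document has a `reports_to` field pointing at its manager's `_id` (none of this is part of the Users examples above):

``` python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
db = client["my_database"]

# Recursively follow reports_to -> _id to build each employee's management chain.
pipeline = [
    {
        "$graphLookup": {
            "from": "Employees",
            "startWith": "$reports_to",
            "connectFromField": "reports_to",
            "connectToField": "_id",
            "as": "management_chain",
        }
    }
]
for doc in db.Employees.aggregate(pipeline):
    print(doc["name"], [m["name"] for m in doc["management_chain"]])
```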
**Multi-Record ACID Transaction ⇒ Multi-Document ACID Transaction**
Finally, let's talk about ACID transactions. Transactions group database operations together so they
all succeed or none succeed. In SQL, we call these multi-record ACID
transactions. In MongoDB, we call these multi-document ACID
transactions.
For more information, see the MongoDB Manual.
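As a small sketch of the syntax (the writes themselves are invented for the example, and transactions require a replica set or an Atlas cluster), a multi-document transaction in PyMongo looks like this:

``` python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
db = client["my_database"]

def update_users(session):
    # Both writes commit together, or neither does.
    db.Users.update_one(
        {"_id": 1},
        {"$push": {"jobHistory": {"title": "Mayor", "yearStarted": 2025}}},
        session=session,
    )
    db.Users.update_one({"_id": 2}, {"$set": {"city": "Pawnee"}}, session=session)

# with_transaction handles commit, abort, and retrying transient errors.
with client.start_session() as session:
    session.with_transaction(update_users)
```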
## Wrap Up
We've just covered a lot of concepts and terminology. The three term
mappings I recommend you internalize as you get started using MongoDB
are:
* Rows map to documents.
* Columns map to fields.
* Tables map to collections.
I created the following diagram you can use as a reference in the future
as you begin your journey using MongoDB.
Be on the lookout for the next post in this series where we'll discuss
the top four reasons you should use MongoDB.
| md | {
"tags": [
"MongoDB",
"SQL"
],
"pageDescription": "Learn how SQL terms and concepts map to MongoDB.",
"contentType": "Article"
} | Mapping Terms and Concepts from SQL to MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/multi-modal-image-vector-search | created | # Build an Image Search Engine With Python & MongoDB
I can still remember when search started to work in Google Photos — the platform where I store all of the photos I take on my cellphone. It seemed magical to me that some kind of machine learning technique could allow me to describe an image in my vast collection of photos and have the platform return that image to me, along with any similar images.
One of the techniques used for this is image classification, where a neural network is used to identify objects and even people in a scene, and the image is tagged with this data. Another technique — which is, if anything, more powerful — is the ability to generate a vector embedding for the image using an embedding model that works with both text and images.
Using a multi-modal embedding model like this allows you to generate a vector that can be stored and efficiently indexed in MongoDB Atlas, and then when you wish to retrieve an image, the same embedding model can be used to generate a vector that is then used to search for images that are similar to the description. It's almost like magic.
## Multi-modal embedding models
A multi-modal embedding model is a machine learning model that encodes information from various data types, like text and images, into a common vector space. It helps link different types of data for tasks such as text-to-image matching or translating between modalities.
The benefit of this is that text and images can be indexed in the same way, allowing images to be searched for by providing either text or another image. You could even search for an item of text with an image, but I can't think of a reason you'd want to do that. The downside of multi-modal models is that they are very complex to produce and thus aren't quite as "clever" as some of the single-mode models that are currently being produced.
In this tutorial, I'll show you how to use the clip-ViT-L-14 model, which encodes both text and images into the same vector space. Because we're using Python, I'll install the model directly into my Python environment to run locally. In production, you probably wouldn't want to have your embedding model running directly inside your web application because it too tightly couples your model, which requires a powerful GPU, to the rest of your application, which will usually be mostly IO-bound. In that case, you can host an appropriate model on Hugging Face or a similar platform.
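To see what "the same vector space" means in practice, here's a small sketch that isn't part of the notebook: it encodes a sentence and an image with the same CLIP model and compares them with cosine similarity. The image path is a made-up example.

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# Downloads the model (around 2GB) the first time it runs.
model = SentenceTransformer("clip-ViT-L-14")

# Text and images are encoded into the same 768-dimensional space...
text_embedding = model.encode("a corgi standing in the snow")
image_embedding = model.encode(Image.open("images/corgi.jpg"))  # hypothetical path

# ...so they can be compared directly. Higher scores mean a closer match.
print(util.cos_sim(text_embedding, image_embedding))
```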
### Describing the search engine
This example search engine is going to be very much a proof of concept. All the code is available in a Jupyter Notebook, and I'm going to store all my images locally on disk. In production, you'd want to use an object storage service like Amazon's S3.
In the same way, in production, you'd either want to host the model using a specialized service or some dedicated setup on the appropriate hardware, whereas I'm going to download and run the model locally.
If you've got an older machine, it may take a while to generate the vectors. I found that a four-year-old Intel MacBook Pro could generate about 1,000 embeddings in 30 minutes, while my MacBook Air M2 can do the same in about five minutes! Either way, maybe go away and make yourself a cup of coffee when the notebook gets to that step.
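If the one-image-at-a-time loop in the notebook feels slow on your machine, one option (a sketch of my own, assuming your dataset uses .JPEG files and that a few hundred images fit in memory at a time) is to hand `encode` a batch of images and let the model process them together:

```python
from pathlib import Path

from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-L-14")

# Encode images in chunks instead of one at a time.
paths = list(Path("images").rglob("*.JPEG"))
chunk_size = 256
for start in range(0, len(paths), chunk_size):
    chunk = paths[start : start + chunk_size]
    images = [Image.open(p) for p in chunk]
    embeddings = model.encode(images, batch_size=32, show_progress_bar=True)
    # embeddings[i] belongs to chunk[i]; store them as shown later in the tutorial.
```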
The search engine will use the same vector model to encode queries (which are text) into the same vector space that was used to encode image data, which means that a phrase describing an image should appear in a similar location to the image’s location in the vector space. This is the magic of multi-modal vector models!
## Getting ready to run the notebook
All of the code described in this tutorial is hosted on GitHub.
The first thing you'll want to do is create a virtual environment using your favorite technique. I tend to use venv, which comes with Python.
Once you've done that, install dependencies with:
```shell
pip install -r requirements.txt
```
Next, you'll need to set an environment variable, `MONGODB_URI`, containing the connection string for your MongoDB cluster.
```shell
# Set the value below to your cluster:
export MONGODB_URI="mongodb+srv://image_search_demo:my_password_not_yours@sandbox.abcde.mongodb.net/image_search_demo?retryWrites=true&w=majority"
```
One more thing you'll need is an "images" directory, containing some images to index! I downloaded Kaggle's ImageNet 1000 (mini) dataset, which contains lots of images at around 4GB, but you can use a different dataset if you prefer. The notebook searches the "images" directory recursively, so you don't need to have everything at the top level.
Then, you can fire up the notebook with:
```shell
jupyter notebook "Image Search.ipynb"
```
## Understanding the code
If you've set up the notebook as described above, you should be able to execute it and follow the explanations in the notebook. In this tutorial, I'm going to highlight the most important code, but I'm not going to reproduce it all here, as I worked hard to make the notebook understandable on its own.
## Setting up the collection
First, let's configure a collection with an appropriate vector search index. In Atlas, if you connect to a cluster, you can configure vector search indexes in the Atlas Search tab, but I prefer to configure indexes in my code to keep everything self-contained.
The following code can be run many times but will only create the collection and associated search index on the first run. This is helpful if you want to run the notebook several times!
```python
client = MongoClient(MONGODB_URI)
db = client.get_database(DATABASE_NAME)
# Ensure the collection exists, because otherwise you can't add a search index to it.
try:
    db.create_collection(IMAGE_COLLECTION_NAME)
except CollectionInvalid:
    # This is raised when the collection already exists.
    print("Images collection already exists")
# Add a search index (if it doesn't already exist):
collection = db.get_collection(IMAGE_COLLECTION_NAME)
if len(list(collection.list_search_indexes(name="default"))) == 0:
    print("Creating search index...")
    collection.create_search_index(
        SearchIndexModel(
            {
                "mappings": {
                    "dynamic": True,
                    "fields": {
                        "embedding": {
                            "dimensions": 768,
                            "similarity": "cosine",
                            "type": "knnVector",
                        }
                    },
                }
            },
            name="default",
        )
    )
    print("Done.")
else:
    print("Vector search index already exists")
```
The most important part of the code above is the configuration being passed to `create_search_index`:
```python
{
    "mappings": {
        "dynamic": True,
        "fields": {
            "embedding": {
                "dimensions": 768,
                "similarity": "cosine",
                "type": "knnVector",
            }
        },
    }
}
```
This specifies that the index will index all fields in the document (because "dynamic" is set to "true") and that the "embedding" field should be indexed as a vector embedding, using cosine similarity. Currently, "knnVector" is the only kind supported by Atlas. The dimension of the vector is set to 768 because that is the number of vector dimensions used by the CLIP model.
## Loading the CLIP model
The following line of code may not look like much, but the first time you execute it, it will download the clip-ViT-L-14 model, which is around 2GB:
```python
# Load CLIP model.
# This may print out warnings, which can be ignored.
model = SentenceTransformer("clip-ViT-L-14")
```
## Generating and storing a vector embedding
Given a path to an image file, an embedding for that image can be generated with the following code:
```python
emb = model.encode(Image.open(path))
```
In this line of code, `model` is the SentenceTransformer I created above, and `Image` comes from the Pillow library and is used to load the image data.
With the embedding vector, a new document can be created with the code below:
```python
collection.insert_one(
{
"_id": re.sub("images/", "", path),
"embedding": emb.tolist(),
}
)
```
I'm only storing the path to the image (as a unique identifier) and the embedding vector. In a real-world application, I'd store any image metadata my application required and probably a URL to an S3 object containing the image data itself.
**Note:** Remember that vector queries can be combined with any other query technique you'd normally use in MongoDB! That's the huge advantage you get using Atlas Vector Search — it's part of MongoDB Atlas, so you can query and transform your data any way you want and even combine it with the power of Atlas Search for free text queries.
The Jupyter Notebook loads images in a loop — by default, it loads 10 images — but that's not nearly enough to see the benefits of an image search engine, so you'll probably want to change `NUMBER_OF_IMAGES_TO_LOAD` to 1000 and run the image load code block again.
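Putting the pieces above together, the loading loop amounts to roughly the following. This is a simplified sketch rather than the notebook's exact code; it assumes the `image_paths` list, the `NUMBER_OF_IMAGES_TO_LOAD` constant, and the `model` and `collection` objects created earlier.
```python
import re

from PIL import Image

# Index the first NUMBER_OF_IMAGES_TO_LOAD images (sketch; re-running will raise on duplicate _id values):
for path in image_paths[:NUMBER_OF_IMAGES_TO_LOAD]:
    emb = model.encode(Image.open(path))  # 768-dimensional CLIP embedding
    collection.insert_one(
        {
            "_id": re.sub("images/", "", path),
            "embedding": emb.tolist(),
        }
    )
```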
## Searching for images
Once you've indexed a good number of images, it's time to test how well it works. I've defined two functions that can be used for this. The first function, `display_images`, takes a list of documents and displays the associated images in a grid. I'm not including the code here because it's a utility function.
The second function, `image_search`, takes a text phrase, encodes it as a vector embedding, and then uses MongoDB's `$vectorSearch` aggregation stage to look up images that are closest to that vector location, limiting the result to the nine closest documents:
```python
def image_search(search_phrase):
"""
Use MongoDB Vector Search to search for a matching image.
The search_phrase is first converted to a vector embedding using
the model loaded earlier in the Jupyter notebook. The vector is then used
to search MongoDB for matching images.
"""
emb = model.encode(search_phrase)
    cursor = collection.aggregate(
        [
{
"$vectorSearch": {
"index": "default",
"path": "embedding",
"queryVector": emb.tolist(),
"numCandidates": 100,
"limit": 9,
}
},
{"$project": {"_id": 1, "score": {"$meta": "vectorSearchScore"}}},
]
)
return list(cursor)
```
The `$project` stage adds a "score" field that shows how similar each document was to the original query vector. 1.0 means "exactly the same," whereas 0.0 would mean that the returned image was totally dissimilar.
With the display_images function and the image_search function, I can search for images of "sharks in the water":
```python
display_images(image_search("sharks in the water"))
```
On my laptop, I get the following grid of nine images, which is pretty good!
![A screenshot, showing a grid containing 9 photos of sharks][1]
When I first tried the above search out, I didn't have enough images loaded, so the query above included a photo of a corgi standing on gray tiles. That wasn't a particularly close match! After I loaded some more images to fix the results of the shark query, I could still find the corgi image by searching for "corgi in the snow" — it's the second image below. Notice that none of the images exactly match the query, but a couple are definitely corgis, and several are standing in the snow.
```python
display_images(image_search("corgi in the snow"))
```
![A grid of photos. Most photos contain either a dog or snow, or both. One of the dogs is definitely a corgi.][2]
One of the things I really love about vector search is that it's "semantic" so I can search by something quite nebulous, like "childhood."
```python
display_images(image_search("childhood"))
```
![A grid of photographs of children or toys or things like colorful erasers.][3]
My favorite result was when I searched for "ennui" (a feeling of listlessness and dissatisfaction arising from a lack of occupation or excitement) which returned photos of bored animals (and a teenager)!
```python
display_images(image_search("ennui"))
```
![Photographs of animals looking bored and slightly sad, except for one photo which contains a young man looking bored and slightly sad.][4]
## Next steps
I hope you found this tutorial as fun to read as I did to write!
If you wanted to run this model in production, you would probably want to use a hosting service like Hugging Face, but I really like the ability to install and try out a model on my laptop with a single line of code. Once the embedding generation, which is processor-intensive and therefore a blocking task, is delegated to an API call, it would be straightforward to build a FastAPI wrapper around the functionality in this code. Then, you could build a powerful web interface around it and deploy your own customized image search engine.
This example also doesn't demonstrate much of MongoDB's query capabilities. The power of vector search with MongoDB Atlas is the ability to combine it with all the power of MongoDB's aggregation framework to query and aggregate your data. If I have some time, I may extend this example to filter by criteria like the date of each photo and maybe allow photos to be tagged manually, or to be automatically grouped into albums.
## Further reading
- MongoDB Atlas Vector Search documentation
- $vectorSearch Aggregation Stage
- What are Multi-Modal Models? from Towards Data Science
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt09221cf1894adc69/65ba2289c600052b89d5b78e/image3.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt47aea2f5cb468ee2/65ba22b1c600057f4ed5b793/image4.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt04170aa66faebd34/65ba23355cdaec53863b9467/image1.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd06c1f2848a13c6f/65ba22f05f12ed09ffe2282c/image2.png | md | {
"tags": [
"Atlas",
"Python",
"Jupyter"
],
"pageDescription": "Build a search engine for photographs with MongoDB Atlas Vector Search and a multi-modal embedding model.",
"contentType": "Tutorial"
} | Build an Image Search Engine With Python & MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/java-multi-doc-acid-transactions | created | # Java - MongoDB Multi-Document ACID Transactions
## Introduction
Introduced in June 2018 with MongoDB 4.0, multi-document ACID transactions are now supported.
But wait... Does that mean MongoDB did not support transactions before that?
No, MongoDB has consistently supported transactions, initially in the form of single-document transactions.
MongoDB 4.0 extends these transactional guarantees across multiple documents, multiple statements, multiple collections,
and multiple databases. What good would a database be without any form of transactional data integrity guarantee?
Before delving into the details, you can access the code and experiment with multi-document ACID
transactions.
``` bash
git clone git@github.com:mongodb-developer/java-quick-start.git
```
## Quick start
### Last update: February 28th, 2024
- Update to Java 21
- Update Java Driver to 5.0.0
- Update `logback-classic` to 1.2.13
### Requirements
- Java 21
- Maven 3.8.7
- Docker (optional)
### Step 1: start MongoDB
Get started with MongoDB Atlas and get a free cluster.
Or you can start an ephemeral single node replica set using Docker for testing quickly:
```bash
docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:7.0.5 --replSet=RS && sleep 3 && docker exec mongo mongosh --quiet --eval "rs.initiate();"
```
### Step 2: start Java
This demo contains two main programs: `ChangeStreams.java` and `Transactions.java`.
* The `ChangeStreams` class enables you to receive notifications of any data changes within the two collections used in
this tutorial.
* The `Transactions` class is the demo itself.
You need two shells to run them.
First shell:
```bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.transactions.ChangeStreams" -Dmongodb.uri="mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority"
```
Second shell:
```bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.transactions.Transactions" -Dmongodb.uri="mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority"
```
> Note: Always execute the `ChangeStreams` program first because it creates the `product` collection with the
> required JSON Schema.
Let’s compare our existing single-document transactions with MongoDB 4.0’s ACID-compliant multi-document transactions
and see how we can leverage this new feature with Java.
## Prior to MongoDB 4.0
Even in MongoDB 3.6 and earlier, every write operation is represented as a **transaction scoped to the level of an
individual document** in the storage layer. Because the document model brings together related data that would otherwise
be modeled across separate parent-child tables in a tabular schema, MongoDB’s atomic single-document operations provide
transaction semantics that meet the data integrity needs of the majority of applications.
Every typical write operation modifying multiple documents actually happens in several independent transactions: one for
each document.
Let’s take an example with a very simple stock management application.
First of all, I need a MongoDB replica set, so please follow the
instructions given above to start MongoDB.
Now, let’s insert the following documents into a `product` collection:
```js
db.product.insertMany([
{ "_id" : "beer", "price" : NumberDecimal("3.75"), "stock" : NumberInt(5) },
{ "_id" : "wine", "price" : NumberDecimal("7.5"), "stock" : NumberInt(3) }
])
```
Let’s imagine there is a sale on, and we want to offer our customers a 20% discount on all our products.
But before applying this discount, we want to monitor when these operations are happening in MongoDB with Change
Streams.
Execute the following in a MongoDB shell:
```js
cursor = db.product.watch([{$match: {operationType: "update"}}]);
while (!cursor.isClosed()) {
let next = cursor.tryNext()
while (next !== null) {
printjson(next);
next = cursor.tryNext()
}
}
```
Keep this shell on the side, open another MongoDB shell, and apply the discount:
```js
RS [direct: primary] test> db.product.updateMany({}, {$mul: {price:0.8}})
{
acknowledged: true,
insertedId: null,
matchedCount: 2,
modifiedCount: 2,
upsertedCount: 0
}
RS [direct: primary] test> db.product.find().pretty()
[
{ _id: 'beer', price: Decimal128("3.00000000000000000"), stock: 5 },
{ _id: 'wine', price: Decimal128("6.0000000000000000"), stock: 3 }
]
```
As you can see, both documents were updated with a single command line but not in a single transaction.
Here is what we can see in the change stream shell:
```js
{
_id: {
_data: '8265580539000000012B042C0100296E5A1004A7F55A5B35BD4C7DB2CD56C6CFEA9C49463C6F7065726174696F6E54797065003C7570646174650046646F63756D656E744B657900463C5F6964003C6265657200000004'
},
operationType: 'update',
clusterTime: Timestamp({ t: 1700267321, i: 1 }),
wallTime: ISODate("2023-11-18T00:28:41.601Z"),
ns: {
db: 'test',
coll: 'product'
},
documentKey: {
_id: 'beer'
},
updateDescription: {
updatedFields: {
price: Decimal128("3.00000000000000000")
},
removedFields: [],
truncatedArrays: []
}
}
{
_id: {
_data: '8265580539000000022B042C0100296E5A1004A7F55A5B35BD4C7DB2CD56C6CFEA9C49463C6F7065726174696F6E54797065003C7570646174650046646F63756D656E744B657900463C5F6964003C77696E6500000004'
},
operationType: 'update',
clusterTime: Timestamp({ t: 1700267321, i: 2 }),
wallTime: ISODate("2023-11-18T00:28:41.601Z"),
ns: {
db: 'test',
coll: 'product'
},
documentKey: {
_id: 'wine'
},
updateDescription: {
updatedFields: {
price: Decimal128("6.0000000000000000")
},
removedFields: [],
truncatedArrays: []
}
}
```
As you can see, the cluster times (see the `clusterTime` key) of the two operations are different: The operations
occurred during the same second but the counter of the timestamp has been incremented by one.
Thus, here each document is updated one at a time, and even if this happens really fast, someone else could read the
documents while the update is running and see only one of the two products with the discount.
Most of the time, this is something you can tolerate in your MongoDB database because, as much as possible, we try to
embed tightly linked (or related) data in the same document.
Consequently, two updates on the same document occur within a single transaction:
```js
RS [direct: primary] test> db.product.updateOne({_id: "wine"},{$inc: {stock:1}, $set: {description : "It's the best wine on Earth"}})
{
acknowledged: true,
insertedId: null,
matchedCount: 1,
modifiedCount: 1,
upsertedCount: 0
}
RS [direct: primary] test> db.product.findOne({_id: "wine"})
{
_id: 'wine',
price: Decimal128("6.0000000000000000"),
stock: 4,
  description: "It's the best wine on Earth"
}
```
However, sometimes, you cannot model all of your related data in a single document, and there are a lot of valid reasons
for choosing not to embed documents.
## MongoDB 4.0 with multi-document ACID transactions
Multi-document ACID transactions in MongoDB closely resemble what
you may already be familiar with in traditional relational databases.
MongoDB’s transactions are a conversational set of related operations that must atomically commit or fully roll back with
all-or-nothing execution.
Transactions are used to make sure operations are atomic even across multiple collections or databases. Consequently,
with snapshot isolation reads, another user can only observe either all the operations or none of them.
Let’s now add a shopping cart to our example.
For this example, two collections are required because we are dealing with two different business entities: the stock
management and the shopping cart each client can create during shopping. The lifecycles of each document in these
collections are different.
A document in the product collection represents an item I’m selling. This contains the current price of the product and
the current stock. I created a POJO to represent
it: Product.java.
```js
{ "_id" : "beer", "price" : NumberDecimal("3"), "stock" : NumberInt(5) }
```
A shopping cart is created when a client adds their first item in the cart and is removed when the client proceeds to
check out or leaves the website. I created a POJO to represent
it: Cart.java.
```js
{
    "_id" : "Alice",
    "items" : [
        {
            "price" : NumberDecimal("3"),
            "productId" : "beer",
            "quantity" : NumberInt(2)
        }
    ]
}
```
The challenge here resides in the fact that I cannot sell more than I possess: If I have five beers to sell, I cannot have
more than five beers distributed across the different client carts.
To ensure that, I have to make sure that the operation creating or updating the client cart is atomic with the stock
update. That’s where the multi-document transaction comes into play.
The transaction must fail in case someone tries to buy something I do not have in my stock. I will add a constraint
on the product stock:
```js
db.product.drop()
db.createCollection("product", {
validator: {
$jsonSchema: {
bsonType: "object",
required: [ "_id", "price", "stock" ],
properties: {
_id: {
bsonType: "string",
description: "must be a string and is required"
},
price: {
bsonType: "decimal",
minimum: 0,
description: "must be a non-negative decimal and is required"
},
stock: {
bsonType: "int",
minimum: 0,
description: "must be a non-negative integer and is required"
}
}
}
}
})
```
> Note that this is already included in the Java code of the `ChangeStreams` class.
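To see the validator in action from mongosh, try inserting a product that violates the schema. The `cider` document below is just a made-up example; the insert is rejected because `stock` must be a non-negative integer:
```js
db.product.insertOne({ "_id" : "cider", "price" : NumberDecimal("4.5"), "stock" : NumberInt(-1) })
// MongoServerError: Document failed validation
```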
To monitor our example, we are going to use MongoDB Change Streams
that were introduced in MongoDB 3.6.
In ChangeStreams.java,
I am going to monitor the database `test` which contains our two collections. It'll print each
operation with its associated cluster time.
```java
package com.mongodb.quickstart.transactions;
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.CreateCollectionOptions;
import com.mongodb.client.model.ValidationAction;
import com.mongodb.client.model.ValidationOptions;
import org.bson.BsonDocument;
import static com.mongodb.client.model.changestream.FullDocument.UPDATE_LOOKUP;
public class ChangeStreams {
private static final String CART = "cart";
private static final String PRODUCT = "product";
    public static void main(String[] args) {
ConnectionString connectionString = new ConnectionString(System.getProperty("mongodb.uri"));
MongoClientSettings clientSettings = MongoClientSettings.builder()
.applyConnectionString(connectionString)
.build();
try (MongoClient client = MongoClients.create(clientSettings)) {
MongoDatabase db = client.getDatabase("test");
System.out.println("Dropping the '" + db.getName() + "' database.");
db.drop();
System.out.println("Creating the '" + CART + "' collection.");
db.createCollection(CART);
System.out.println("Creating the '" + PRODUCT + "' collection with a JSON Schema.");
db.createCollection(PRODUCT, productJsonSchemaValidator());
System.out.println("Watching the collections in the DB " + db.getName() + "...");
db.watch()
.fullDocument(UPDATE_LOOKUP)
.forEach(doc -> System.out.println(doc.getClusterTime() + " => " + doc.getFullDocument()));
}
}
private static CreateCollectionOptions productJsonSchemaValidator() {
String jsonSchema = """
{
"$jsonSchema": {
"bsonType": "object",
"required": ["_id", "price", "stock"],
"properties": {
"_id": {
"bsonType": "string",
"description": "must be a string and is required"
},
"price": {
"bsonType": "decimal",
"minimum": 0,
"description": "must be a non-negative decimal and is required"
},
"stock": {
"bsonType": "int",
"minimum": 0,
"description": "must be a non-negative integer and is required"
}
}
}
}""";
return new CreateCollectionOptions().validationOptions(
new ValidationOptions().validationAction(ValidationAction.ERROR)
.validator(BsonDocument.parse(jsonSchema)));
}
}
```
In this example, we have five beers to sell.
Alice wants to buy two beers, but we are **not** going to use a multi-document transaction for this. We will
observe in the change streams two operations at two different cluster times:
- One creating the cart
- One updating the stock
Then, Alice adds two more beers to her cart, and we are going to use a transaction this time. The result in the change
stream will be two operations happening at the same cluster time.
Finally, she will try to order two extra beers but the jsonSchema validator will fail the product update (as there is only
one in stock) and result in a
rollback. We will not see anything in the change stream.
Below is the source code
for Transactions.java:
```java
package com.mongodb.quickstart.transactions;
import com.mongodb.*;
import com.mongodb.client.*;
import com.mongodb.quickstart.transactions.models.Cart;
import com.mongodb.quickstart.transactions.models.Product;
import org.bson.BsonDocument;
import org.bson.codecs.configuration.CodecRegistry;
import org.bson.codecs.pojo.PojoCodecProvider;
import org.bson.conversions.Bson;
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import static com.mongodb.client.model.Filters.*;
import static com.mongodb.client.model.Updates.inc;
import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;
public class Transactions {
private static final BigDecimal BEER_PRICE = BigDecimal.valueOf(3);
private static final String BEER_ID = "beer";
private static final Bson filterId = eq("_id", BEER_ID);
private static final Bson filterAlice = eq("_id", "Alice");
private static final Bson matchBeer = elemMatch("items", eq("productId", "beer"));
private static final Bson incrementTwoBeers = inc("items.$.quantity", 2);
private static final Bson decrementTwoBeers = inc("stock", -2);
    private static MongoCollection<Cart> cartCollection;
    private static MongoCollection<Product> productCollection;
    public static void main(String[] args) {
ConnectionString connectionString = new ConnectionString(System.getProperty("mongodb.uri"));
CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());
CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);
MongoClientSettings clientSettings = MongoClientSettings.builder()
.applyConnectionString(connectionString)
.codecRegistry(codecRegistry)
.build();
try (MongoClient client = MongoClients.create(clientSettings)) {
MongoDatabase db = client.getDatabase("test");
cartCollection = db.getCollection("cart", Cart.class);
productCollection = db.getCollection("product", Product.class);
transactionsDemo(client);
}
}
private static void transactionsDemo(MongoClient client) {
clearCollections();
insertProductBeer();
printDatabaseState();
System.out.println("""
######### NO TRANSACTION #########
Alice wants 2 beers.
We have to create a cart in the 'cart' collection and update the stock in the 'product' collection.
The 2 actions are correlated but can not be executed at the same cluster time.
Any error blocking one operation could result in stock error or a sale of beer that we can't fulfill as we have no stock.
------------------------------------""");
aliceWantsTwoBeers();
sleep();
removingBeersFromStock();
System.out.println("####################################\n");
printDatabaseState();
sleep();
System.out.println("""
######### WITH TRANSACTION #########
Alice wants 2 extra beers.
Now we can update the 2 collections simultaneously.
The 2 operations only happen when the transaction is committed.
------------------------------------""");
aliceWantsTwoExtraBeersInTransactionThenCommitOrRollback(client);
sleep();
System.out.println("""
######### WITH TRANSACTION #########
Alice wants 2 extra beers.
This time we do not have enough beers in stock so the transaction will rollback.
------------------------------------""");
aliceWantsTwoExtraBeersInTransactionThenCommitOrRollback(client);
}
private static void aliceWantsTwoExtraBeersInTransactionThenCommitOrRollback(MongoClient client) {
ClientSession session = client.startSession();
try {
session.startTransaction(TransactionOptions.builder().writeConcern(WriteConcern.MAJORITY).build());
aliceWantsTwoExtraBeers(session);
sleep();
removingBeerFromStock(session);
session.commitTransaction();
} catch (MongoException e) {
session.abortTransaction();
System.out.println("####### ROLLBACK TRANSACTION #######");
} finally {
session.close();
System.out.println("####################################\n");
printDatabaseState();
}
}
private static void removingBeersFromStock() {
System.out.println("Trying to update beer stock : -2 beers.");
try {
productCollection.updateOne(filterId, decrementTwoBeers);
} catch (MongoException e) {
System.out.println("######## MongoException ########");
System.out.println("##### STOCK CANNOT BE NEGATIVE #####");
throw e;
}
}
private static void removingBeerFromStock(ClientSession session) {
System.out.println("Trying to update beer stock : -2 beers.");
try {
productCollection.updateOne(session, filterId, decrementTwoBeers);
} catch (MongoException e) {
System.out.println("######## MongoException ########");
System.out.println("##### STOCK CANNOT BE NEGATIVE #####");
throw e;
}
}
private static void aliceWantsTwoBeers() {
System.out.println("Alice adds 2 beers in her cart.");
cartCollection.insertOne(new Cart("Alice", List.of(new Cart.Item(BEER_ID, 2, BEER_PRICE))));
}
private static void aliceWantsTwoExtraBeers(ClientSession session) {
System.out.println("Updating Alice cart : adding 2 beers.");
cartCollection.updateOne(session, and(filterAlice, matchBeer), incrementTwoBeers);
}
private static void insertProductBeer() {
productCollection.insertOne(new Product(BEER_ID, 5, BEER_PRICE));
}
private static void clearCollections() {
productCollection.deleteMany(new BsonDocument());
cartCollection.deleteMany(new BsonDocument());
}
private static void printDatabaseState() {
System.out.println("Database state:");
printProducts(productCollection.find().into(new ArrayList<>()));
printCarts(cartCollection.find().into(new ArrayList<>()));
System.out.println();
}
    private static void printProducts(List<Product> products) {
products.forEach(System.out::println);
}
    private static void printCarts(List<Cart> carts) {
if (carts.isEmpty()) {
System.out.println("No carts...");
} else {
carts.forEach(System.out::println);
}
}
private static void sleep() {
System.out.println("Sleeping 1 second...");
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
System.err.println("Oops!");
e.printStackTrace();
}
}
}
```
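As a side note, the driver also offers a callback-based transaction API, `ClientSession.withTransaction()`, which retries transient transaction errors for you. It isn't used in this demo, but the core of `aliceWantsTwoExtraBeersInTransactionThenCommitOrRollback` could be sketched with it like this (reusing the fields defined in the class above):
```java
private static void aliceWantsTwoExtraBeersWithCallbackApi(MongoClient client) {
    try (ClientSession session = client.startSession()) {
        // The callback API starts, commits, and (on error) aborts the transaction for us.
        session.withTransaction(() -> {
            cartCollection.updateOne(session, and(filterAlice, matchBeer), incrementTwoBeers);
            productCollection.updateOne(session, filterId, decrementTwoBeers);
            return "committed";
        }, TransactionOptions.builder().writeConcern(WriteConcern.MAJORITY).build());
    } catch (MongoException e) {
        System.out.println("####### ROLLBACK TRANSACTION #######");
    }
}
```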
Here is the console of the change stream:
```
Dropping the 'test' database.
Creating the 'cart' collection.
Creating the 'product' collection with a JSON Schema.
Watching the collections in the DB test...
Timestamp{value=7304460075832180737, seconds=1700702141, inc=1} => Document{{_id=beer, price=3, stock=5}}
Timestamp{value=7304460075832180738, seconds=1700702141, inc=2} => Document{{_id=Alice, items=[Document{{price=3, productId=beer, quantity=2}}]}}
Timestamp{value=7304460080127148033, seconds=1700702142, inc=1} => Document{{_id=beer, price=3, stock=3}}
Timestamp{value=7304460088717082625, seconds=1700702144, inc=1} => Document{{_id=Alice, items=[Document{{price=3, productId=beer, quantity=4}}]}}
Timestamp{value=7304460088717082625, seconds=1700702144, inc=1} => Document{{_id=beer, price=3, stock=1}}
```
As you can see here, we only get five operations: the two operations from the final, rolled-back transaction were never
committed to the database, so the change stream has nothing to show for them.
- The first operation is the product collection initialization (create the product document for the beers).
- The second and third operations are the first two beers Alice adds to her cart *without* a multi-doc transaction. Notice
that the two operations do *not* happen at the same cluster time.
- The last two operations are the two additional beers Alice adds to her cart *with* a multi-doc transaction. Notice
  that this time the two operations are atomic and happen at exactly the same cluster time.
Here is the console of the transaction Java process that sums up everything I said earlier.
```
Database state:
Product{id='beer', stock=5, price=3}
No carts...
######### NO TRANSACTION #########
Alice wants 2 beers.
We have to create a cart in the 'cart' collection and update the stock in the 'product' collection.
The 2 actions are correlated but can not be executed on the same cluster time.
Any error blocking one operation could result in stock error or a sale of beer that we can't fulfill as we have no stock.
------------------------------------
Alice adds 2 beers in her cart.
Sleeping 1 second...
Trying to update beer stock : -2 beers.
####################################
Database state:
Product{id='beer', stock=3, price=3}
Cart{id='Alice', items=[Item{productId=beer, quantity=2, price=3}]}
Sleeping 1 second...
######### WITH TRANSACTION #########
Alice wants 2 extra beers.
Now we can update the 2 collections simultaneously.
The 2 operations only happen when the transaction is committed.
------------------------------------
Updating Alice cart : adding 2 beers.
Sleeping 1 second...
Trying to update beer stock : -2 beers.
####################################
Database state:
Product{id='beer', stock=1, price=3}
Cart{id='Alice', items=[Item{productId=beer, quantity=4, price=3}]}
Sleeping 1 second...
######### WITH TRANSACTION #########
Alice wants 2 extra beers.
This time we do not have enough beers in stock so the transaction will rollback.
------------------------------------
Updating Alice cart : adding 2 beers.
Sleeping 1 second...
Trying to update beer stock : -2 beers.
######## MongoException ########
##### STOCK CANNOT BE NEGATIVE #####
####### ROLLBACK TRANSACTION #######
####################################
Database state:
Product{id='beer', stock=1, price=3}
Cart{id='Alice', items=[Item{productId=beer, quantity=4, price=3}]}
```
## Next steps
Thanks for taking the time to read my post. I hope you found it useful and interesting.
As a reminder, all the code is
available on the GitHub repository
for you to experiment with.
If you're seeking an easy way to begin with MongoDB, you can achieve that in just five clicks using
our MongoDB Atlas cloud database service.
| md | {
"tags": [
"Java",
"MongoDB"
],
"pageDescription": "In this tutorial you'll learn more about multi-document ACID transaction in MongoDB with Java. You'll understand why they are necessary in some cases and how they work.",
"contentType": "Quickstart"
} | Java - MongoDB Multi-Document ACID Transactions | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/setup-multi-cloud-cluster-mongodb-atlas | created | # Create a Multi-Cloud Cluster with MongoDB Atlas
Multi-cloud clusters on MongoDB Atlas are now generally available! Just as you might distribute your data across various regions, you can now distribute across multiple cloud providers as well. This gives you a lot more freedom and flexibility to run your application anywhere and move across any cloud without changing a single line of code.
Want to use Azure DevOps for continuous integration and continuous deployment but Google Cloud for Vision AI? Possible! Need higher availability in Canada but only have a single region available in your current cloud provider? Add additional nodes from another Canadian region on a different cloud provider! These kinds of scenarios are what multi-cloud was made for!
In this post, I won't be telling you *why* multi-cloud is useful; there are several articles (like this one or that one) and a Twitch stream that do a great job of that already! Rather, in this post, I'd like to:
- Show you how to set up a multi-cloud cluster in MongoDB Atlas.
- Explain what each of the new multi-cloud options mean.
- Acknowledge some new considerations that come with multi-cloud capabilities.
- Answer some common questions surrounding multi-cloud clusters.
Let's get started!
## Requirements
To go through this tutorial, you'll need:
- A MongoDB Cloud account
- To create an M10 cluster or higher (note that this isn't covered by the free tier)
## Quick Jump
- How to Set Up a Multi-Cloud Cluster
- How to Test a Primary Node Failover to a Different Cloud Provider
- Differences between Electable, Read-Only, and Analytics Nodes
- Choosing Your Electable Node Distribution
- Multi-Cloud Considerations
- Multi-Cloud FAQs
## How to Set Up a Multi-Cloud Cluster
1. Log into your MongoDB Cloud account.
2. Select the organization and project you wish to create a multi-cloud cluster in. If you don't have either, first create an organization and project before proceeding.
3. Click "Build a Cluster". (Alternatively, click "Create a New Cluster" toward the top-right of the screen, visible if you have at least one other cluster.)
4. If this is the first cluster in your project, you'll be asked to choose what kind of cluster you'd like to create. Select "Create a cluster" for the "Dedicated Multi-Region Clusters" option.
5. You are brought to the "Create a Multi-Region Cluster" screen. If not already in the ON position, toggle the "Multi-Cloud, Multi-Region & Workload Isolation" option:
6. This will expand several more options for you to configure. These options determine the type and distribution of nodes in your cluster:
>
>
>💡 *What's the difference between "Multi-Region" and "Multi-Cloud" Clusters?*
>
>The introduction of multi-cloud capabilities in Atlas changes how Atlas defines geographies for a cluster. Now, when referencing a *multi-region* cluster, this can be a cluster that is hosted in:
>- more than one region within one cloud provider,
>- more than one cloud provider (a cluster that spans more than one cloud provider spans more than one region by design), or
>- multiple regions across multiple cloud providers.
>
>As each cloud provider has its own set of regions, multi-cloud clusters are also multi-region clusters.
>
>
7. Configure your cluster. In this step, you'll choose a combination of Electable, Read-Only, and Analytics nodes that will make up your cluster.
>
>
>💡 *Choosing Nodes for your Multi-Cloud Cluster*
>
>- **Electable nodes**: Additional candidate nodes (per region or cloud provider) and the only nodes that can become the primary in case of a failure. Be sure to choose an odd number of total electable nodes (minimum of three); these recommended node distributions are a good place to start.
>- **Read-Only nodes**: Great for local reads in specific areas.
>- **Analytics nodes**: Great for isolating analytical workloads from your main, operational workloads.
>
>Still can't make a decision? Check out the detailed differences between Electable, Read-Only, and Analytics nodes for more information!
>
>
As an example, here's my final configuration (West Coast-based, using a 2-2-1 electable node distribution):
I've set up five electable nodes in regions closest to me, with a GCP Las Vegas region as the highest priority as I'm based in Las Vegas. Since both Azure and AWS offer a California region, the next closest ones available to me, I've chosen them as the next eligible regions. To accommodate my other service areas on the East Coast, I've also configured two read-only nodes: one in Virginia and one in Illinois. Finally, to separate my reporting queries, I've configured a dedicated node as an analytics node. I chose the same GCP Las Vegas region to reduce latency and cost.
8. Choose the remaining options for your cluster:
- Expand the "Cluster Tier" section and select the "M10" tier (or higher, depending on your needs).
- Expand the "Additional Settings" section and select "MongoDB 4.4," which is the latest version as of this time.
- Expand the "Cluster Name" section and choose a cluster name. This name can't be changed after the cluster is created, so choose wisely!
9. With all options set, click the "Create Cluster" button. After a short wait, your multi-cloud cluster will be created! When it's ready, click on your cluster name to see an overview of your nodes. Here's what mine looks like:
As you can see, the GCP Las Vegas region has been set as my preferred region. Likewise, one of the nodes in that region is set as my primary. And as expected, the read-only and analytics nodes are set to the respective regions I've chosen:
Sweet! You've just set up your own multi-cloud cluster. 🎉 To test it out, you can continue onto the next section where you'll manually trigger an election and see your primary node restored to a different cloud provider!
>
>
>🌟 You've just set up a multi-cloud cluster! If you've found this tutorial helpful or just want to share your newfound knowledge, consider sending a Tweet!
>
>
## Testing a Primary Node Failover to a Different Cloud Provider
If you're creating a multi-cloud cluster for higher availability guarantees, you may be wondering how to test that it will actually work if one cloud provider goes down. Atlas offers self-healing clusters, powered by built-in automation tools, to ensure that in the case of a primary node outage, your cluster will still remain online as it elects a new primary node and reboots a new secondary node when possible. To test a primary being moved to a different cloud provider, you can follow these steps to manually trigger an election:
1. From the main "Clusters" overview in Atlas, find the cluster you'd like to test. Select the three dots (...) to open the cluster's additional options, then click "Edit Configuration":
2. You'll be brought to a similar configuration screen as when you created your cluster. Expand the "Cloud Provider & Region" section.
3. Change your highest priority region to one of your lower-priority regions. For example, my current highest priority region is GCP Las Vegas (us-west4). To change it, I'll drag my Azure California (westus) region to the top, making it the new highest priority region:
4. Click the "Review Changes" button. You'll be brought to a summary page where you can double-check the changes you are about to make:
5. If everything looks good, click the "Apply Changes" button.
6. After a short wait to deploy these changes, you'll see that your primary has been set to a node from your newly prioritized region and cloud provider. As you can see for my cluster, my primary is now set to a node in my Azure (westus) region:
💡 In the event of an actual outage, Atlas automatically handles this failover and election process for you! These steps are just here so that you can test a failover manually and visually inspect that your primary node has, indeed, been restored on a different cloud provider.
There you have it! You've created a multi-cloud cluster on MongoDB Atlas and have even tested a manual "failover" to a new cloud provider. You can now grab the connection string from your cluster's Connect wizard and use it with your application.
>
>
>⚡ Make sure you delete your cluster when finished with it to avoid any additional charges you may not want. To delete a cluster, click the three dots (...) on the cluster overview page of the cluster you want to delete, then click Terminate. Similar to GitHub, MongoDB Atlas will ask you to type the name of your cluster to confirm that you want to delete it, including all data that is on the cluster!
>
>
## Differences between Electable, Read-Only, and Analytics Nodes
### Electable Nodes
These nodes fulfill your availability needs by providing additional candidate nodes and/or alternative locations for your primary node. When the primary fails, electable nodes reduce the impact by failing over to an alternative node. And when wider availability is needed for a region, to comply with specific data sovereignty requirements, for example, an electable node from another cloud provider and similar region can help fill in the gap.
💡 When configuring electable nodes in a multi-cloud cluster, keep the following in mind:
- Electable nodes are the *only ones that participate in replica set elections*.
- Any Electable node can become the primary, as long as a majority of the nodes in the replica set remain available.
- Spreading your Electable nodes across large distances can lead to longer election times.
As you select which cloud providers and regions will host your electable nodes, also take note of the order you place them in. Atlas prioritizes nodes for primary eligibility based on their order in the Electable nodes table. This means the *first row of the Electable nodes table is set as the highest priority region*. Atlas lets you know this as you'll see the "HIGHEST" badge listed as the region's priority.
If there are multiple nodes configured for this region, they will also rank higher in primary eligibility over any other regions in the table. The remaining regions (other rows in the Electable nodes table) and their corresponding nodes rank in the order that they appear, with the last row being the lowest priority region.
As an example, take this 2-2-1 node configuration:
When Atlas prioritizes nodes for primary eligibility, it does so in this order:
Highest Priority => Nodes 1 & 2 in Azure California (westus) region
Next Priority => Nodes 3 & 4 in GCP Las Vegas (us-west4) region
Lowest Priority => Single node in AWS N. California (us-west-1) region
To change the priority order of your electable nodes, grab the region you'd like to move (click and hold the three vertical lines at the start of its row) and drag it to the position you'd prefer.
If you need to change the primary cloud provider for your cluster after its creation, don't worry! You can do so by editing your cluster configuration via the Atlas UI.
### Read-Only Nodes
To optimize local reads in specific areas, use read-only nodes. These nodes have distinct read-preference tags that allow you to direct queries to the regions you specify. So, you could configure a node for each of your serviceable regions, directing your users' queries to the node closest to them. This results in reduced latency for everyone! 🙌
💡 When configuring Read-only nodes in a multi-cloud cluster, keep the following in mind:
- Read-only nodes don't participate in elections.
- Because they don't participate in elections, they don't provide high availability.
- Read-only nodes can't become the primary for their cluster.
To add a read-only node to your cluster, click "+ Add a provider/region," then select the cloud provider, region, and number of nodes you'd like to add. If you want to remove a read-only node from your cluster, click the garbage can icon to the right of each row.
### Analytics Nodes
If you need to run analytical workloads and would rather separate those from your main, operational workloads, use Analytics nodes. These nodes are great for complex or long-running operations, like reporting queries and ETL jobs, that can take up a lot of cluster resources and compete with your other traffic. The benefit of analytics nodes is that you can isolate those queries completely.
Analytics nodes have the same considerations as read-only nodes. They can also be added and removed from your cluster in the same way as the other nodes.
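To actually route queries to these nodes from your application, you typically combine a secondary read preference with the node-type replica set tags that Atlas attaches to each member. The connection string below is only a sketch, and the exact tag value (`nodeType:ANALYTICS`) is an assumption you should verify against your own cluster's configuration in Atlas:
```
mongodb+srv://user:password@cluster0.example.mongodb.net/?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS
```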
## Choosing Your Electable Node Distribution
Deploying an odd number of electable nodes ensures reliable elections. With this in mind, we require a minimum of three electable nodes to be configured. Depending on your scenario, these nodes can be divided in several different ways. We generally advise one of the following node distribution options:
### **2-2-1**: *Two nodes in the highest-priority cloud region, two nodes in a lower-priority cloud region, one node in a different lower-priority region*
To achieve continuous read **and** write availability across any cloud provider and region outage, a 2-2-1 node distribution is needed. By spreading across multiple cloud providers, you gain higher availability guarantees. However, as 2-2-1 node distributions need to continuously replicate data to five nodes, across different regions and cloud providers, this can be the more costly configuration. If cost is a concern, then the 1-1-1 node distribution can be an effective alternative.
### **1-1-1**: *One node in three different cloud regions*
In this configuration, you'll be able to achieve similar (but not quite exact) read and write availability to the 2-2-1 distribution with three cloud providers. The biggest difference, however, is that when a cloud provider *does* go down, you may encounter higher write latency, especially if your writes have to temporarily shift to a region that's farther away.
## Multi-Cloud Considerations
With multi-cloud capabilities come new considerations to keep in mind. As you start creating more of your own multi-cloud clusters, be aware of the following:
### Election/Replication Lag
The larger the number of regions you have, or the longer the physical distances between your nodes, the **longer your election times and replication lag** will be. You may have already experienced this with multi-region clusters, but it can be exacerbated with multi-cloud clusters, as nodes are potentially spread even farther apart.
### Connection Strings
If you use the standard connection string format, removing an entire region from an existing multi-region cluster **may result in a new connection string**. Instead, **it is strongly recommended** that you use the DNS seedlist format to avoid potential service loss for your applications.
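For reference, the two formats look roughly like this (all hostnames below are placeholders, not real clusters):
```
# Standard format: every host is listed explicitly, so adding or removing a region can change the string
mongodb://host1.example.mongodb.net:27017,host2.example.mongodb.net:27017,host3.example.mongodb.net:27017/?ssl=true&replicaSet=myRepl

# DNS seedlist (SRV) format: a single, stable hostname; Atlas keeps the underlying DNS records up to date
mongodb+srv://multi-cloud-demo.example.mongodb.net/
```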
### Host Names
Atlas **does not** guarantee that host names remain consistent with respect to node types during topology changes. For example, in my cluster named "multi-cloud-demo", I had an Analytics node named `multi-cloud-demo-shard-00-05.opbdn.mongodb.net:27017`. When a topology change occurs, such as changing my selected regions or scaling the number of nodes in my cluster, Atlas does not guarantee that the specific host name `multi-cloud-demo-shard-00-05.opbdn.mongodb.net:27017` will still refer to an Analytics node.
### Built-in Custom Write Concerns
Atlas provides built-in custom write concerns for multi-region clusters. These can help improve data consistency by ensuring operations are propagated to a set number of regions before an operation can succeed.
##### Custom Write Concerns for Multi-Region Clusters in MongoDB Atlas
| Write Concern | Tags | Description |
|----------------|-----------------|-------------------------------------------------------------------------------------------------------------|
| `twoRegions` | `{region: 2}` | Write operations must be acknowledged by at least two regions in your cluster |
| `threeRegions` | `{region: 3}` | Write operations must be acknowledged by at least three regions in your cluster |
| `twoProviders` | `{provider: 2}` | Write operations must be acknowledged by at least two regions in your cluster with distinct cloud providers |
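For example, from mongosh you could require that a write is acknowledged in at least two regions before the insert succeeds. This is a sketch (the `orders` collection is made up), and it assumes a multi-region or multi-cloud Atlas cluster where these built-in write concerns are defined:
```js
db.orders.insertOne(
 { "item" : "beer", "quantity" : NumberInt(2) },
 { writeConcern: { w: "twoRegions" } }
)
```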
## Multi-Cloud FAQs
**Can existing clusters be modified to be multi-cloud clusters?** Yes. All clusters M10 or higher can be changed to a multi-cloud cluster through the cluster configuration settings in Atlas.
**Can I deploy a multi-cloud sharded cluster?** Yes. Both multi-cloud replica sets and multi-cloud sharded clusters are available to deploy on Atlas.
**Do multi-cloud clusters work the same way on all versions, cluster tiers, and clouds?** Yes. Multi-cloud clusters will behave very similarly to single-cloud multi-region clusters, which means it will also be subject to the same constraints.
**What happens to the config servers in a multi-cloud sharded cluster?** Config servers will behave in the same way they do for existing sharded clusters on MongoDB Atlas today. If a cluster has two electable regions, there will be two config servers in the highest priority region and one config server in the next highest region. If a cluster has three or more electable regions, there will be one config server in each of the three highest priority regions.
**Can I use a key management system for encryption at rest with a multi-cloud cluster?** Yes. Whichever KMS you prefer (Azure Key Vault, AWS KMS, or Google Cloud KMS) can be used, though only one KMS can be active at a time. Otherwise, key management for encryption at rest works in the same way as it does for single-cloud clusters.
**Can I pin data to certain cloud providers for compliance requirements?** Yes. With Global Clusters, you can pin data to specific zones or regions to fulfill any data sovereignty requirements you may have.
Have a question that's not answered here? Head over to our MongoDB Community Forums and start a topic! Our community of MongoDB experts and employees are always happy to help!
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn everything you need to know about multi-cloud clusters on MongoDB Atlas.",
"contentType": "Tutorial"
} | Create a Multi-Cloud Cluster with MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/introducing-mongodb-analyzer-dotnet | created | # Introducing the MongoDB Analyzer for .NET
Correct code culprits at compile time!
As C# and .NET developers, we know that it can sometimes be frustrating to work idiomatically with MongoDB queries and aggregations. Without a way to see if your LINQ query or Builder expression corresponds to the MongoDB Query API (formerly known as MQL) during development, you previously had to wait for runtime errors in order to troubleshoot your queries. We knew there had to be a way to work more seamlessly with C# and MongoDB.
That’s why we’ve built the MongoDB Analyzer for .NET! Instead of mentally mapping the idiomatic version of your query in C# to the MongoDB Query API, the MongoDB Analyzer can do it for you - and even provide the generated Query API expression right in your IDE. The MongoDB Analyzer even surfaces useful information and helpful warnings on invalid expressions at compile time, bringing greater visibility to the root causes of bugs. And when used together with the recently released LINQ3 provider (now supported in MongoDB C#/.NET Driver 2.14.0 and higher), you can compose and understand queries in a much more manageable way.
Let’s take a look at how to install and use the new MongoDB Analyzer as a NuGet package. We’ll follow with some code samples so you can see why this is a must-have tool for Visual Studio!
## Install MongoDB Analyzer as a NuGet Package
In Visual Studio, install the `MongoDB.Analyzer` NuGet package:
*Package Manager*
```
Install-Package MongoDB.Analyzer -Version 1.0.0
```
*.NET CLI*
```
dotnet add package MongoDB.Analyzer --version 1.0.0
```
Once installed, it will be added to your project’s Dependencies list, under Analyzers:
After installing and once the analyzer has run, you’ll find all of the diagnostic warnings output to the Error List panel. As you start to inspect your code, you’ll also see that any unsupported expressions will be highlighted.
## Inspecting Information Messages and Warnings
As you write LINQ or Builders expressions, an information tooltip can be accessed by hovering over the three grey dots under your expression:
*Accessing the tooltip for a LINQ expression*
This tooltip displays the corresponding Query API language to the expression you are writing and updates in real-time! With the translated query at your tooltips, you can confirm the query being generated (and executed!) is the one you expect.
This is a far more efficient process of composing and testing queries—focus on the invalid expressions instead of wasting time translating your code for the Query API! And if you ever need to copy the resulting queries generated, you can do so right from your IDE (from the Error List panel).
Another common issue the MongoDB Analyzer solves is surfacing unsupported expressions and invalid queries at compile time. You’ll find all of these issues listed as warnings:
*Unsupported expressions shown as warnings in Visual Studio’s Error List*
This is quite useful as not all LINQ expressions are supported by the MongoDB C#/.NET driver. Similarly, supported expressions will differ depending on which version of LINQ you use.
## Code Samples—See the MongoDB Analyzer for .NET in Action
Now that we know what the MongoDB Analyzer can do for us, let’s see it live!
### Builder Expressions
These are a few examples that show how Builder expressions are analyzed. As you’ll see, the MongoDB Analyzer provides immediate feedback through the tooltip. Hovering over your code shows you the supported Query API language that corresponds to the query/expression you are writing.
*Builder Filter Definition - Filter movies by matching genre, score that is greater than or equal to minimum score, and a match on the title search term.*
*Builder Sort Definition - Sort movies by score (lowest to highest) and title (from Z to A).*
*Unsupported Builder Expression - Highlighted and shown as warning in Error List.*
### LINQ Queries
The MongoDB Analyzer uses the default LINQ provider of the C#/.NET driver (LINQ2). Expressions that aren't supported in LINQ2 but are supported in LINQ3 will show the appropriate warnings, as you'll see in one of the following examples. If you'd like to switch the LINQ provider the MongoDB Analyzer uses, set `"DefaultLinqVersion": "V3"` in the `mongodb.analyzer.json` file.
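For example, a minimal `mongodb.analyzer.json` at the root of your project could look like this (only the setting mentioned above is shown):
```json
{
  "DefaultLinqVersion": "V3"
}
```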
*LINQ Filter Query - Aggregation pipeline.*
*LINQ Query - Get movie genre statistics; uses aggregation pipeline to group by and select a dynamic object.*
*Unsupported LINQ Expression - GetHashCode() method unsupported.*
*Unsupported LINQ Expression - Method referencing a lambda parameter unsupported.*
*Unsupported LINQ2, but supported LINQ3 Expression - Trim() is not supported in LINQ2, but is supported in LINQ3.*
## MongoDB Analyzer + New LINQ3 Provider = 💚
If you’d rather not see those “unsupported in LINQ2, but supported in LINQ3” warnings, now is also a good time to update to the latest MongoDB C#/.NET driver (2.14.1) which has LINQ3 support! While the full transition from LINQ2 to LINQ3 continues, you can explicitly configure your MongoClient to use the new LINQ provider like so:
```csharp
var connectionString = "mongodb://localhost";
var clientSettings = MongoClientSettings.FromConnectionString(connectionString);
clientSettings.LinqProvider = LinqProvider.V3;
var client = new MongoClient(clientSettings);
```
## Integrate MongoDB Analyzer for .NET into Your Pipelines
The MongoDB Analyzer can also be used from the CLI which means integrating this static analysis tool into your continuous integration and continuous deployment pipelines is seamless! For example, running `dotnet build` from the command line will output MongoDB Analyzer warnings to the terminal:
*Running dotnet build command outputs warnings from the MongoDB Analyzer*
Adding this as a step in your build pipeline can be a valuable gate check for your build. You’ll save yourself a potential headache and catch unsupported expressions and invalid queries much earlier.
Another idea: Output a Static Analysis Results Interchange Format (SARIF) file and use it to generate explain plans for all of your queries. SARIF is a standard, JSON-based format for the output of static analysis tools, making a SARIF file an ideal place to grab the supported queries generated by the MongoDB Analyzer.
To output a SARIF file for your project, you’ll need to add the `ErrorLog` option to your `.csproj` file. You’ll be able to find it at the root of your project (unless you’ve specified otherwise) the next time you build your project.
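For example, adding something like the following property to a `<PropertyGroup>` in your `.csproj` tells the compiler to write analyzer diagnostics to a SARIF file (the file name is just an example):
```xml
<PropertyGroup>
  <ErrorLog>mongodb-analyzer-report.sarif</ErrorLog>
</PropertyGroup>
```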
With this file, you can load it via a mongosh script, process the file to find and "clean" the MongoDB Query API expressions it contains, and generate explain plans for the list of queries. What can you do with this? A great example would be to output a build warning (or outright fail the build) if you catch any missing indexes! Adding steps like these to your build and using the information from the explain plans, you can prevent potential performance issues from ever making it to production.
## We Want to Hear From You!
With the release of the MongoDB Analyzer for .NET, we hope to speed up your development cycle and increase your productivity in three ways: 1) by making it easier for you to see how your idiomatic queries map to the MongoDB Query API, 2) by helping you spot unsupported expressions and invalid queries faster (at compile time, baby), and 3) by streamlining your development process by enabling static analysis for your MongoDB queries in your CI/CD pipelines!
We're quite eager to see the .NET and C# communities use this tool and are even more eager to hear your feedback. The MongoDB Analyzer is ready for you to install as a NuGet package and can be added to any existing project that uses the MongoDB .NET driver. We want to continue improving this tool, and that can only be done with your help. If you find any issues, are missing critical functionality, or have an edge case that the MongoDB Analyzer doesn't cover, please let us know! You can also post in our Community Forums.
**Additional Resources**
* MongoDB Analyzer Docs | md | {
"tags": [
"C#",
".NET"
],
"pageDescription": "Say hello to the MongoDB Analyzer for .NET. This tool translates your C# queries to their MongoDB Query API equivalent and warns you of unsupported expressions and invalid queries at compile time, right in Visual Studio.",
"contentType": "News & Announcements"
} | Introducing the MongoDB Analyzer for .NET | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/jdk-21-virtual-threads | created | # Java 21: Unlocking the Power of the MongoDB Java Driver With Virtual Threads
## Introduction
Greetings, dev community! Java 21 is here, and if you're using the MongoDB Java driver, this is a ride you won't want to
miss. Increased performance and non-blocking threads are on the menu today! 🚀
In this article, we're going to take a stroll through some of the key features of Java 21 that are not just exciting
for Java devs in general but are particularly juicy for those of us pushing the boundaries with MongoDB.
## JDK 21
To begin with, let's have a look at all the features released in Java 21, each of which is tracked as a JDK Enhancement Proposal (JEP).
- JEP 430: String Templates (Preview)
- JEP 431: Sequenced Collections
- JEP 439: Generational ZGC
- JEP 440: Record Patterns
- JEP 441: Pattern Matching for switch
- JEP 442: Foreign Function and Memory API (Third Preview)
- JEP 443: Unnamed Patterns and Variables (Preview)
- JEP 444: Virtual Threads
- JEP 445: Unnamed Classes and Instance Main Methods (Preview)
- JEP 446: Scoped Values (Preview)
- JEP 448: Vector API (Sixth Incubator)
- JEP 449: Deprecate the Windows 32-bit x86 Port for Removal
- JEP 451: Prepare to Disallow the Dynamic Loading of Agents
- JEP 452: Key Encapsulation Mechanism API
- JEP 453: Structured Concurrency (Preview)
## The Project Loom and MongoDB Java driver 4.11
While some of these JEPs, like deprecations, might not be the most exciting, some are more interesting, particularly these three.
- JEP 444: Virtual Threads
- JEP 453: Structured Concurrency (Preview)
- JEP 446: Scoped Values (Preview)
Let's discuss a bit more about them.
These three JEPs are closely related to the Project Loom which is an
initiative within the Java
ecosystem that introduces lightweight threads
called virtual threads. These virtual threads
simplify concurrent programming, providing a more scalable and efficient alternative to traditional heavyweight threads.
With Project Loom, developers can create thousands of virtual threads without the
typical performance overhead, making it easier to write concurrent code. Virtual threads offer improved resource
utilization and simplify code maintenance, providing a more accessible approach to managing concurrency in Java
applications. The project aims to enhance the developer experience by reducing the complexities associated with thread
management while optimizing performance.
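To make that concrete, here is a small, self-contained sketch (not MongoDB-specific) that submits thousands of blocking tasks, each on its own virtual thread, using the `Executors.newVirtualThreadPerTaskExecutor()` API introduced by JEP 444:

```java
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {

    public static void main(String[] args) {
        // One cheap virtual thread per task; blocking calls park the virtual thread
        // instead of pinning an OS (platform) thread.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int taskId = i;
                executor.submit(() -> {
                    Thread.sleep(100); // stand-in for blocking I/O, e.g., a database call
                    return taskId;
                });
            }
        } // close() implicitly waits for the submitted tasks to complete
        System.out.println("All tasks finished");
    }
}
```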
> Since version 4.11 of the
MongoDB Java driver, virtual threads are fully
supported.
If you want more details, you can read the epic in the MongoDB Jira which
explains the motivations for this support.
You can also read more about the Java
driver’s new features
and compatibility.
## Spring Boot and virtual threads
In Spring Boot 3.2.0+, you just have to add the following property in your `application.properties` file
to enable virtual threads.
```properties
spring.threads.virtual.enabled=true
```
It's **huge** because this means that your accesses to MongoDB resources are now non-blocking — thanks to virtual threads.
This is going to dramatically improve the performance of your back end. Managing a large workload is now easier as all
the threads are non-blocking by default and the overhead of the context switching for the platform threads is almost
free.
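As a rough sketch of what this looks like in practice (the `Movie` document and endpoint below are illustrative and not tied to a specific sample dataset), a plain blocking Spring Data MongoDB call now runs on a virtual thread per request once the property above is set:

```java
import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@Document("movies")
class Movie {
    @Id
    private String id;
    private String title;
    // getters and setters omitted for brevity
}

interface MovieRepository extends MongoRepository<Movie, String> {}

@RestController
class MovieController {

    private final MovieRepository repository;

    MovieController(MovieRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/movies")
    List<Movie> allMovies() {
        // A regular blocking call: with virtual threads enabled, it parks the request's
        // virtual thread instead of tying up one of the server's platform threads.
        return repository.findAll();
    }
}
```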
You can read the blog post from Dan Vega to learn more
about Spring Boot and virtual threads.
## Conclusion
Java 21's recent release has unleashed exciting features for MongoDB Java driver users, particularly with the
introduction of virtual threads. Since version 4.11, these lightweight threads offer a streamlined approach to
concurrent programming, enhancing scalability and efficiency.
For Spring Boot enthusiasts, embracing virtual threads is a game-changer for backend performance, making MongoDB
interactions non-blocking by default.
Curious to experience these advancements? Dive into the future of Java development and explore MongoDB with Spring Boot
using
the Java Spring Boot MongoDB Starter in GitHub.
If you don't have one already, claim your free MongoDB cluster
in MongoDB Atlas to get started with the above repository faster.
Any burning questions? Come chat with us in the MongoDB Community Forums.
Happy coding! 🚀 | md | {
"tags": [
"MongoDB",
"Java",
"Spring"
],
"pageDescription": "Learn more about the new Java 21 release and Virtual Threads.",
"contentType": "Article"
} | Java 21: Unlocking the Power of the MongoDB Java Driver With Virtual Threads | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/graphql-apis-hasura | created | # Rapidly Build a Highly Performant GraphQL API for MongoDB With Hasura
## Introduction
In 2012, GraphQL was introduced as a developer-friendly API spec that allows clients to request exactly the data they
need, making it efficient and fast. By reducing the need for multiple requests and limiting the over-fetching of data,
GraphQL simplifies data retrieval, improving the developer experience. This leads to better applications by ensuring
more efficient data loading and less bandwidth usage, particularly important for mobile or low-bandwidth environments.
Using GraphQL — instead of REST — on MongoDB is desirable for many use cases, especially when there is a need to
simultaneously query data from multiple MongoDB instances, or when engineers need to join NoSQL data from MongoDB with
data from another source.
However, engineers are often faced with difficulties in implementing GraphQL APIs and layering them onto their MongoDB
data sources. Often, this learning curve and the maintenance overhead inhibit adoption. Hasura was designed to address
this common challenge with adopting GraphQL.
Hasura is a low-code GraphQL API solution. With Hasura, even engineers unfamiliar with GraphQL can build feature-rich
GraphQL APIs — complete with pagination, filtering, sorting, etc. — on MongoDB and dozens of other data sources in
minutes. Hasura also supports data federation, enabling developers to create a unified GraphQL API across different
databases and services. In this guide, we’ll show you how to quickly connect Hasura to MongoDB and generate a secure,
high-performance GraphQL API.
We will walk you through the steps to:
- Create a project on Hasura Cloud.
- Create a database on MongoDB Atlas.
- Connect Hasura to MongoDB.
- Generate a high-performance GraphQL API instantly.
- Try out GraphQL queries with relationships.
- Analyze query execution.
We will also go over how and why the generated API is highly performant.
At the end of this guide, you’ll be able to create your own high-performance, production-ready GraphQL API with Hasura
for your existing or new MongoDB Atlas instance.
## Guide to connecting Hasura with MongoDB
You will need a project on Hasura Cloud and a MongoDB database on Atlas to get started with the next steps.
### Create a project on Hasura Cloud
Head over
to cloud.hasura.io
to create an account or log in. Once you are on the Cloud Dashboard, navigate
to Projects and create a new project by clicking on `New Project`.
### Create a database on MongoDB Atlas
Head over to MongoDB Atlas, create a project if you don’t have one, and navigate to the `Database` page under the Deployments section. You should see a page like the one below:
You can follow the MongoDB Atlas getting started guide in the docs, particularly until Step 4, in case you are stuck in any of the steps above.
### Load sample dataset
Once the database deployment is complete, you might want to load some sample data for the cluster. You can do this by
heading to the `Database` tab and under the newly created Cluster, click on the `...` that opens up with an option
to `Load Sample Dataset`. This can take a few seconds.
> Read more
> about how Hasura queries are efficiently compiled for high performance.
### Iterating on the API with updates to collections
As the structure of a document in a collection changes, it should be as simple as updating the Hasura metadata to add or
remove the modified fields. The schema is flexible, and you can update the logical model to get the API updates. There
are no database migrations required — just add or remove fields from the metadata to reflect in the API.
## Summary
The integration of MongoDB with Hasura’s GraphQL Engine brings a new level of efficiency and scalability to developers.
By leveraging Hasura’s ability to create a unified GraphQL API from diverse data sources, developers can quickly expose
MongoDB data over a secure, performant, and highly customizable GraphQL API.
We recommend a few resources to learn more about the integration.
- Hasura docs for MongoDB Atlas integration
- Running Hasura and MongoDB locally
- It should’ve been MongoDB all along!
Join the Hasura Discord server to engage with the Hasura community, and ask questions about
GraphQL or Hasura’s integration with MongoDB.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt41deae7313d3196d/65cd4b4108fffdec1972284c/image8.jpg
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt18494d21c0934117/65cd4b400167d0749f8f9e6c/image15.jpg
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1364bb8b705997c0/65cd4b41762832af2bc5f453/image10.jpg
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt73fbb5707846963e/65cd4b40470a5a9e9bcb86ae/image16.jpg
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt31ed46e340623a82/65cd4b418a7a5153870a741b/image2.jpg
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdcb4a8993e2bd50b/65cd4b408a7a5148a90a7417/image14.jpg
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta472e64238c46910/65cd4b41faacaed48c1fce7f/image6.jpg
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdd90ff83d842fb73/65cd4b4008fffd23ea722848/image19.jpg
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt32b456930b96b959/65cd4b41f48bc2469c50fa76/image7.jpg
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc3b49b304533ed87/65cd4b410167d01c2b8f9e70/image3.jpg
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9bf1445744157bef/65cd4b4100d72eb99cf537b1/image5.jpg
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfa807b7d9bee708b/65cd4b41ab4731a8b00eecbe/image11.jpg
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf44e2aa8ff1e02a1/65cd4b419333f76f83109fb3/image4.jpg
[14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb2722b7da74ada16/65cd4b41dccfc663efab00ae/image9.jpg
[15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt933bb95de2d55385/65cd4b407c5d415bdb528a1b/image20.jpg
[16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9cd80a46a72aee23/65cd4b4123dbef0a8bfff34c/image12.jpg
[17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt03ae958f364433f6/65cd4b41670d7e0076281bbd/image13.jpg
[18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5d1e291dba16fe93/65cd4b400ad03883cc882ad8/image18.jpg
[19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7b71a7e3f04444ae/65cd4b4023dbeffeccfff348/image17.jpg
[20]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt722abb79eaede951/65cd4b419778063874c05447/image1.png | md | {
"tags": [
"Atlas",
"GraphQL"
],
"pageDescription": "Learn how to configure and deploy a GraphQL API that uses MongoDB collections and documents with Hasura.",
"contentType": "Tutorial"
} | Rapidly Build a Highly Performant GraphQL API for MongoDB With Hasura | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/coronavirus-map-live-data-tracker-charts | created | # Coronavirus Map and Live Data Tracker with MongoDB Charts
## Updates
### November 15th, 2023
- John Hopkins University (JHU) has stopped collecting data as of March 10th, 2023.
- Here is JHU's GitHub repository.
- First data entry is 2020-01-22, last one is 2023-03-09.
- The data isn't updated anymore and is available in this cluster in readonly mode.
```
mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/
```
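As an example of what you can still do with the archived data, here is a small PyMongo sketch that reads from that read-only cluster. The `covid19` database and `global_and_us` collection names (and the field names in the query) are taken from the companion MongoDB Open Data COVID-19 project, so double-check that post if they don't match:

```python
from pymongo import MongoClient

# Read-only archive of the JHU data (same connection string as above)
client = MongoClient("mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/")

# Database/collection names come from the Open Data COVID-19 project documentation
stats = client["covid19"]["global_and_us"]

# Most recent document for a given country
latest_france = stats.find_one({"country": "France"}, sort=[("date", -1)])
print(latest_france)
```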
### August 20th, 2020
- Removed links to Thomas's dashboard as it's not supported anymore.
- Updated some Charts in the dashboard as JHU discontinued the recovered cases.
### April 21st, 2020
- MongoDB Open Data COVID-19 is now available on the new MongoDB Developer Hub.
- You can check our code samples in our Github repository.
- The JHU dataset changed again a few times. It's not really stable and it makes it complicated to build something reliable on top of this service. This is the reason why we created our more accessible version of the JHU dataset.
- It's the same data but transformed in JSON documents and available in a readonly MongoDB Cluster we built for you.
### March 24th, 2020
- Johns Hopkins University changed the dataset they release daily.
- I created a new dashboard based using the new dataset.
- My new dashboard updates **automatically every hour** as new data comes in.
## Too Long, Didn't Read
Thomas Rueckstiess and I came up with two MongoDB Charts dashboards using the Coronavirus dataset.
> - Check out Maxime's dashboard.
> - Check out Thomas's dashboard (not supported anymore).
Here is an example of the charts we made using the Coronavirus dataset. More below and in the MongoDB Charts dashboards.
:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-4266-8264-d37ce88ff9fa theme=light autorefresh=3600}
:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-479c-83b2-d37ce88ffa07 theme=dark autorefresh=3600}
## Let The Data Speak
We have to make decisions at work every day.
- Should we discontinue this project?
- Should we hire more people?
- Can we invest more in this branch? How much?
Leaders make decisions. Great leaders make informed decisions, based on facts backed by data and not just based on assumptions, feelings or opinions.
The management of the Coronavirus outbreak obeys the same rules. To make the right decisions, we need accurate data.
Data about the Coronavirus is relatively easy to find. The Johns Hopkins University has done a terrific job at gathering, cleaning and curating data from various sources. They wrote an excellent blog post which I encourage you to read.
Having data is great but it can also be overwhelming. That's why data visualisation is also very important. Data alone doesn't speak and doesn't help make informed decisions.
Johns Hopkins University also did a great job on this part because they provided this dashboard to make this data more human accessible.
This is great... But we can do even better visualisations with MongoDB Charts.
## Free Your Data With MongoDB Charts
Thomas Rueckstiess and I imported all the data from Johns Hopkins University (and we will keep importing new data as they are published) into a MongoDB database. If you are interested by the data import, you can check my Github repository.
Then we used this data to produce a dashboard to monitor the progression of the virus.
> Here is Maxime's dashboard. It's shared publicly for the greater good.
MongoDB Charts also allows you to embed easily charts within a website... or a blog post.
Here are a few of the graphs I was able to import in here with just two clicks.
:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-4593-8e0e-d37ce88ffa15 theme=dark autorefresh=3600}
:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-43e7-8a6d-d37ce88ffa30 theme=light autorefresh=3600}
:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-42b4-8b88-d37ce88ffa3a theme=light autorefresh=3600}
:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-44c9-87f5-d37ce88ffa34 theme=light autorefresh=3600}
:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-41a8-8106-d37ce88ffa2c theme=dark autorefresh=3600}
:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-4cdc-8686-d37ce88ff9fc theme=dark autorefresh=3600}
:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-47fd-88bd-d37ce88ffa0d theme=light autorefresh=3600 width=760 height=1000}
As you can see, MongoDB Charts is really powerful and super easy to embed.
## Participation
If you have a source of data that provides different or more accurate data about this virus, please let me know on Twitter @MBeugnet or on the MongoDB community website. I will do my best to update this data and provide more charts.
## Sources
- MongoDB Open Data COVID-19 - Blog Post.
- MongoDB Open Data COVID-19 - Github Repo.
- Dashboard from Johns Hopkins University.
- Blog post from Johns Hopkins University.
- Public Google Spreadsheet (old version) - deprecated.
- Public Google Spreadsheet (new version) - deprecated.
- Public Google Spreadsheet (Time Series) - deprecated.
- GitHub Repository with CSV dataset from Johns Hopkins University.
- Image credit: Scientific Animations (CC BY-SA 4.0).
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how we put MongoDB Charts to use to track the global Coronavirus outbreak.",
"contentType": "Article"
} | Coronavirus Map and Live Data Tracker with MongoDB Charts | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/synchronize-mobile-applications-mongodb-atlas-google-cloud-mysql | created | # Synchronize Your Mobile Application With MongoDB Atlas and Google Cloud MySQL
Enterprises around the world are looking to modernize their existing applications. They need a streamlined way to synchronize data from devices at the Edge into their cloud data stores. Whether their goals are business growth or fending off the competition, application modernization is the primary vehicle that will help them get there.
Often the first step in this process is to move data from an existing relational database repository (like Oracle, SQL Server, DB2, or Postgres, for example) into a JSON-based flexible database in the cloud (like MongoDB, Aerospike, Couchbase, Cassandra, or DocumentDB). Sounds simple, right? I mean, really, if JSON (NoSQL) is so simple and flexible, why would data migration be hard? There must be a bunch of automated tools to facilitate this data migration, right?
Unfortunately, the answers are “Not really,” “Because data synchronization is rarely simple,” and “The available tools are often DIY-based and don’t provide nearly the level of automation required to facilitate an ongoing, large-scale, production-quality, conflict-resolved data synchronization.”
## Why is this so complex?
### Data modeling
One of the first challenges is data modeling. To effectively leverage the benefits inherent in a JSON-based schema, you need to include data modeling as part of your migration strategy. Simply flattening or de-normalizing a relational schema into nested JSON structures, or worse yet, simply moving from relational to JSON without any data modeling consideration, results in a JSON data repository that is slow, inefficient, and difficult to query. You need an intelligent data modeling platform that automatically creates the most effective JSON structures based on your application needs and the target JSON repository without requiring specialized resources like data scientists and data engineers.
### Building and monitoring pipelines
Once you’ve mapped the data, you need tools that allow you to build reliable, scalable data pipelines to move the data from the source to the target repository. Sadly, most of the tools available today are primarily DIY scripting tools that require both custom (often complex) coding to transform the data to the new schema properly and custom (often complex) monitoring to ensure that the new data pipelines are working reliably. You need a data pipeline automation and monitoring platform to move the data and ensure its quality.
### DIY is hard
This process of data synchronization, pipeline automation, and monitoring is where most application modernization projects get bogged down and/or ultimately fail. These failed projects often consume significant resources before they fail, as well as affect the overall business functionality and outcomes, and lead to missed objectives.
## CDC: MongoDB Atlas, Atlas Device Sync, and Dataworkz
Synchronizing data between edge devices and various databases can be complex. Simplifying this is our goal, and we will demonstrate how to achieve bi-directional synchronization between mobile devices and MySQL in the cloud using MongoDB Atlas Device Sync and Dataworkz.
Let's dive in.
## Prerequisites
- Accounts with MongoDB Atlas (this can be tested on free tiers), Dataworkz, and Google Cloud
- Kafka
- Debezium
## Step 1: prepare your mobile application with Atlas Device Sync
Set up a template app for this test by following the steps outlined in the docs. Once that step is complete, you will have a mobile application running locally, with automated synchronization back to MongoDB Atlas using the Atlas Device Sync SDK.
## Step 2: set up a source database and target MongoDB Atlas Collection
We used GCP in us-west1-a and Cloud MySQL for this example. Be sure to include sample data.
### Check if BinLog replication is already enabled
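A quick way to verify is to run the following statements against the Cloud SQL instance (Debezium's MySQL connector requires binary logging to be enabled and in row-based format). This is a generic MySQL sketch rather than a Cloud SQL-specific command:

```sql
-- Should return ON if binary logging (required for CDC) is enabled
SHOW VARIABLES LIKE 'log_bin';

-- Debezium expects row-based binary logging
SHOW VARIABLES LIKE 'binlog_format';
```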
Visit MongoDB Atlas and dataworkz.com to create accounts and begin your automated bi-directional data synchronization journey.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt85b18bd98135559d/65c54454245ed9597d91062b/1.jpg
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0f313395e6bb7e0a/65c5447cf0270544bcea8b0f/2.jpg
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt42fc95f74b034c2c/65c54497ab9c0fe8aab945fc/3.jpg
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0bf05ceea39406ef/65c544b068e9235c39e585bc/4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt852256fbd6c92a1d/65c544cf245ed9f5af910634/5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6f7a22db7bfeb113/65c544f78b3a0d12277c6c70/6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt732e20dad37e46e1/65c54515fb34d04d731b1a80/7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc1f83c67ed8bf7e5/65c5453625aa94148c3513c5/8.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcfe7cf5ed073b181/65c5454a49edef40e16a355a/9.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt99688601bdbef64a/65c5455d211bae4eaea55ad0/10.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7317f754880d41ba/65c545780acbc5455311135a/11.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt690ff6f3783cb06f/65c5458b68e92372a8e585d2/12.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0810a0d139692e45/65c545a5ab9c0fdc87b9460a/13.png
[14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt39b298970485dc92/65c545ba4cd37037ee70ec51/14.png
[15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6596816400af6c46/65c545d4eed32eadf6ac449d/15.png
[16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4eaf0f6c5b0ca4a5/65c545e3ff4e591910ad0ed6/16.png
[17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9ffa8c37f5e26e5f/65c545fa4cd3709a7870ec56/17.png
[18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1678582be8e8cc07/65c5461225aa943f393513cd/18.png
[19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd4b2f6a8e389884a/65c5462625aa94d9b93513d1/19.png
[20]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbd4a6e31ba6103f6/65c5463fd4db5559ac6efc99/20.png
[21]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6e3be8f5f7f0e0f1/65c546547998dae7b86b5e4b/21.png
[22]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd2fccad909850d63/65c54669d2c66186e28430d5/22.png
[23]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blted07a5674eae79e0/65c5467d8b3a0d226c7c6c7f/23.png
[24]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8b8b5ce479b8fc7b/65c5469125aa94964c3513d5/24.png | md | {
"tags": [
"Atlas",
"Google Cloud",
"Mobile",
"Kafka"
],
"pageDescription": "Learn how to set up automated, automated, bi-directional synchronization of data from mobile devices to MongoDB Atlas and Google Cloud MySQL.",
"contentType": "Tutorial"
} | Synchronize Your Mobile Application With MongoDB Atlas and Google Cloud MySQL | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/building-android-app | created | # Building an Android App
As technology and styles of work evolve, the need for apps to support mobile is as important as ever. In 2023, Android held around 70% of the mobile market share, so it is vital for developers to understand how to build apps for Android.
In this tutorial, you will learn the basics of getting started with your first Android app, using the programming language Kotlin. Although historically, native Android apps have been written in Java, Kotlin was upgraded to the official language for Android by Google in 2019.
## Prerequisites
In order to follow along with this tutorial, you will need to have downloaded and installed Android Studio. This is the official IDE for Android development and comes with all the tools you will need to get started. It is available for Windows, macOS, and Linux.
> You won’t need an Android device to run your apps, thanks to the use of the Android Emulator, which comes with Android Studio. You will be guided through setup when you first open up Android Studio.
## Creating the project
The first step is to create the application. You can do this from the “Welcome to Android Studio” page that appears when you open Android Studio for the first time.
> If you have opened it before and don’t see this window but instead a list of recent projects, you can create a new project from the **File** menu.
1. Click **New Project**, which starts a wizard to guide you through
creating a new project.
2. In the **Templates** window, make sure the **Phone and Tablet** option is selected on the left. Select Empty Activity and then click
Next.
3. Give your project a name. I chose "Hello Android".
For Package name, this can be left as the default if you want. In the future, you might update it to reflect your company's domain in reverse, keeping the app name at the end. The reversed nature of the package name can seem confusing, but it is just a convention to be aware of if you update it.
Minimum SDK: If you make an app in the future intended for users, you might choose an earlier version of Android to support more devices, but this isn’t necessary for this tutorial, so update it to a newer version. I went with API 33, aka “Tiramisu.” Android gives all their operating system (OS) versions names shared with sweet things, all in alphabetical order.
> Fun fact: I created my first ever Android app back when the OS version was nicknamed Jelly Bean!
You can leave the other values as default, although you may choose to update the **Save** location. Once done, press the **Finish** button.
It will then open your new application inside the main Android Studio window. It can take a minute or two to index and build everything, so if you don’t see much straight away, don’t worry. Once completed, you will see the ```MainActivity.kt``` file open in the editor and the project structure on the left.
## Running the app for the first time
Although we haven’t made any code changes yet, the Empty Activity template comes with a basic UI already. So let’s run our app and see what it looks like out of the box.
1. Select the **Run** button that looks like a play button at the top of the Android Studio window. Or you can select the hamburger menu in the top left and go to **Run -> Run ‘app’**.
2. Once it has been built and deployed, you will see it running in the Running Devices area to the right of the editor window. Out of the box, it just says “Hello Android.”
Congratulations! You have your first running native Android app!
## Updating the UI
Now your app is running, let’s take a look at how to make some changes to the UI.
Android supports two types of UI: XML-based layouts and Jetpack Compose, known as Compose. Compose is now the recommended solution for Android, and this is what our new app is using, so we will continue to use it.
Compose uses composable functions to define UI components. You can see this in action in the code inside ```MainActivity.kt```, where there is a function called ```Greeting``` annotated with ```@Composable```. It takes in a string for a name and a modifier and uses those inside a text element.
We are going to update the greeting function to now include a way to enter some text and a button to click that will update the label to say “Hello” to the name you enter in the text box.
Replace the existing code from ```class MainActivity : ComponentActivity() {``` onward with the following:
```kotlin
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
HelloAndroidTheme {
// A surface container using the 'background' color from the theme
Surface(
modifier = Modifier.fillMaxSize(),
color = MaterialTheme.colorScheme.background
) {
Greeting()
}
}
}
}
}
@Composable
fun Greeting() {
var message by remember { mutableStateOf("")}
var greeting by remember { mutableStateOf("") }
Column (Modifier.padding(16.dp)) {
TextField(
value = message,
onValueChange = { message = it },
label = {Text("Enter your name..")}
)
Button(onClick = { greeting = "Hello, $message" }) {
Text("Say Hello")
}
Text(greeting)
}
}
@Preview(showBackground = true)
@Composable
fun GreetingPreview() {
HelloAndroidTheme {
Greeting()
}
}
```
Let’s now take a look at what has changed.
### OnCreate
We have removed the passing of a hardcoded name value here as a parameter to the Greeting function, as we will now get that from the text box.
### Greeting
We have added two function-scoped variables here for holding the values we want to update dynamically.
We then start defining our components. Now we have multiple components visible, we want to apply some layout and styling to them, so we have created a column so the three sub-components appear vertically. Inside the column definition, we also pass padding of 16dp.
Our column layout contains a TextField for entering text. The value property is linked to our message variable. The onValueChange property says that when the value of the box is changed, assign it to the message variable so it is always up to date. It also has a label property, which acts as a placeholder hint to the user.
Next is the button. This has an onClick property where we define what happens when the button is clicked. In this case, it sets the value of the greeting variable to be “Hello,” plus the message.
Lastly, we have a text component to display the greeting. Each time the button is clicked and the greeting variable is updated, that text field will update on the screen.
### GreetingPreview
This is a function that allows you to preview your UI without running it on a device or emulator. It is very similar to the OnCreate function above where it specifies the default HelloAndroidTheme and then our Greeting component.
If you want to view the preview of your code, you can click the button in the top right corner of the editor window, to the left of the **Running Devices** area that shows a hamburger icon with a rectangle with rounded corners next to it. This is the split view button. It splits the view between your code and the preview.
### Imports
If Android Studio is giving you error messages in the code, it might be because you are missing some import statements at the top of the file.
Expand the imports section at the top and replace it with the following:
```kotlin
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Button
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.Surface
import androidx.compose.material3.Text
import androidx.compose.material3.TextField
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.text.input.TextFieldValue
import androidx.compose.ui.tooling.preview.Preview
import androidx.compose.ui.unit.dp
import com.mongodb.helloandroid.ui.theme.HelloAndroidTheme
```
You will need to update the last import statement so that it matches your own package name, which may not be com.mongodb.helloandroid.
## Testing the app
Now that we have updated the UI, let’s run it and see our shiny new UI. Click the **Run** button again and wait for it to deploy to the emulator or your device, if you have one connected.
Try playing around with what you enter and pressing the button to see the result of your great work!
## Summary
There you have it, your first Android app written in Kotlin using Android Studio, just like that! Compose makes it super easy to create UIs in no time at all.
If you want to take it further, you might want to add the ability to store information that persists between app sessions. MongoDB has an amazing product, called Atlas Device Sync, that allows you to store data on the device for your app and have it sync to MongoDB Atlas. You can read more about this and how to get started in our Kotlin Docs.
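As a rough sketch of what local persistence with the Realm Kotlin SDK can look like (the `SavedGreeting` model and function below are illustrative and not part of this tutorial), you could persist the greeting like this:

```kotlin
import io.realm.kotlin.Realm
import io.realm.kotlin.RealmConfiguration
import io.realm.kotlin.types.RealmObject

// Illustrative model for storing a greeting between app sessions
class SavedGreeting : RealmObject {
    var message: String = ""
}

fun saveGreeting(text: String) {
    // Open (or create) a local realm containing the SavedGreeting schema
    val config = RealmConfiguration.create(schema = setOf(SavedGreeting::class))
    val realm = Realm.open(config)

    // Persist the greeting inside a write transaction
    realm.writeBlocking {
        copyToRealm(SavedGreeting().apply { message = text })
    }
    realm.close()
}
```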
| md | {
"tags": [
"Realm",
"Kotlin",
"Mobile",
"Jetpack Compose",
"Android"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Building an Android App | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/rag-with-polm-stack-llamaindex-openai-mongodb | created | # How to Build a RAG System With LlamaIndex, OpenAI, and MongoDB Vector Database
## Introduction
Large language models (LLMs) substantially benefit business applications, especially in use cases surrounding productivity. Although LLMs and their applications are undoubtedly advantageous, relying solely on the parametric knowledge of LLMs to respond to user inputs and prompts proves insufficient for private data or queries dependent on real-time data. This is why a non-parametric secure knowledge source that holds sensitive data and can be updated periodically is required to augment user inputs to LLMs with current and relevant information.
**Retrieval-augmented generation (RAG) is a system design pattern that leverages information retrieval techniques and generative AI models to provide accurate and relevant responses to user queries by retrieving semantically relevant data to supplement user queries with additional context, combined as input to LLMs**.
The content of the following steps explains in some detail the library classes, methods, and processes that are used to achieve the objective of implementing a RAG system.
## Step 1: install libraries
The code snippet below installs various libraries that will provide functionalities to access LLMs, reranking models, databases, and collection methods, abstracting complexities associated with extensive coding into a few lines and method calls.
- **LlamaIndex**: a data framework that provides functionalities to connect data sources (files, PDFs, websites, or other data sources) to both closed (OpenAI, Cohere) and open-source (Llama) large language models; the LlamaIndex framework abstracts complexities associated with data ingestion, RAG pipeline implementation, and development of LLM applications (chatbots, agents, etc.).
- **LlamaIndex (MongoDB)**: a LlamaIndex extension library that imports all the necessary methods to connect to and operate with the MongoDB Atlas database.
- **LlamaIndex (OpenAI)**: a LlamaIndex extension library that imports all the necessary methods to access the OpenAI embedding models.
- **PyMongo**: a Python library for interacting with MongoDB that enables functionalities to connect to a cluster and query data stored in collections and documents.
- **Hugging Face datasets**: a Hugging Face library that holds audio, vision, and text datasets.
- **Pandas**: provides data structures for efficient data processing and analysis using Python.
```shell
!pip install llama-index
!pip install llama-index-vector-stores-mongodb
!pip install llama-index-embeddings-openai
!pip install pymongo
!pip install datasets
!pip install pandas
```
## Step 2: data sourcing and OpenAI key setup
The command below assigns an OpenAI API key to the environment variable OPENAI\_API\_KEY. This ensures LlamaIndex creates an OpenAI client with the provided OpenAI API key to access features such as LLM models (GPT-3, GPT-3.5-turbo, and GPT-4) and embedding models (text-embedding-ada-002, text-embedding-3-small, and text-embedding-3-large).
```
%env OPENAI_API_KEY=openai_key_here
```
The data utilised in this tutorial is sourced from Hugging Face datasets, specifically the AIatMongoDB/embedded\_movies dataset. A datapoint within the movie dataset contains information corresponding to a particular movie; plot, genre, cast, runtime, and more are captured for each data point. After loading the dataset into the development environment, it is converted into a Pandas data frame object, which enables data structure manipulation and analysis with relative ease.
``` python
from datasets import load_dataset
import pandas as pd

# https://huggingface.co/datasets/AIatMongoDB/embedded_movies
dataset = load_dataset("AIatMongoDB/embedded_movies")

# Convert the dataset to a pandas dataframe
dataset_df = pd.DataFrame(dataset['train'])
dataset_df.head(5)
```
## Step 3: data cleaning, preparation, and loading
The operations within this step focus on enforcing data integrity and quality. The first process ensures that each data point's ```plot``` attribute is not empty, as this is the primary data we utilise in the embedding process. This step also ensures we remove the ```plot_embedding``` attribute from all data points as this will be replaced by new embeddings created with a different model, the ```text-embedding-3-small```.
``` python
# Remove data point where plot column is missing
dataset_df=dataset_df.dropna(subset=['plot'])
print("\nNumber of missing values in each column after removal:")
print(dataset_df.isnull().sum())
# Remove the plot_embedding from each data point in the dataset as we are going to create new embeddings with the new OpenAI embedding Model "text-embedding-3-small"
dataset_df=dataset_df.drop(columns=['plot_embedding'])
dataset_df.head(5)
```
An embedding object is initialised from the ```OpenAIEmbedding``` model, part of the ```llama_index.embeddings``` module. Specifically, the ```OpenAIEmbedding``` model takes two parameters: the embedding model name, ```text-embedding-3-small``` for this tutorial, and the dimensions of the vector embedding.
The code snippet below configures the embedding model, and LLM utilised throughout the development environment. The LLM utilised to respond to user queries is the default OpenAI model enabled via LlamaIndex and is initialised with the ```OpenAI()``` class. To ensure consistency within all consumers of LLMs and their configuration, LlamaIndex provided the "Settings" module, which enables a global configuration of the LLMs and embedding models utilised in the environment.
```python
from llama_index.core.settings import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
embed_model=OpenAIEmbedding(model="text-embedding-3-small",dimensions=256)
llm=OpenAI()
Settings.llm=llm
Settings.embed_model=embed_model
```
Next, it's crucial to appropriately format the dataset and its contents for MongoDB ingestion. In the upcoming steps, we'll transform the current structure of the dataset, ```dataset_df``` — presently a DataFrame object — into a JSON string. This dataset conversion is done in the line ```documents_json = dataset_df.to_json(orient='records')```, which assigns the JSON string to the ```documents_json``` variable.
By specifying orient='records', each row of the DataFrame is converted into a separate JSON object.
The following step creates a list of Python dictionaries, ```documents_list```, each representing an individual record from the original DataFrame. The final step in this process is to convert each dictionary into manually constructed documents, which are first-class citizens that hold information extracted from a data source. Documents within LlamaIndex hold information, such as metadata, that is utilised in downstream processing and ingestion stages in a RAG pipeline.
One important point to note is that when creating a LlamaIndex document manually, it's possible to configure the attributes of the documents that are utilised when passed as input to embedding models and LLMs. The ```excluded_llm_metadata_keys``` and ```excluded_embed_metadata_keys``` arguments on the document class constructor take a list of attributes to ignore when generating inputs for downstream processes within a RAG pipeline. A reason for doing this is to limit the context utilised within embedding models for more relevant retrievals, and in the case of LLMs, this is used to control the metadata information combined with user queries. Without configuration of either of the two arguments, a document by default utilises all content in its metadata as embedding and LLM input.
At the end of this step, a Python list contains several documents corresponding to each data point in the preprocessed dataset.
```python
import json
from llama_index.core import Document
from llama_index.core.schema import MetadataMode
# Convert the DataFrame to a JSON string representation
documents_json = dataset_df.to_json(orient='records')
# Load the JSON string into a Python list of dictionaries
documents_list = json.loads(documents_json)
llama_documents = []
for document in documents_list:
    # Value for metadata must be one of (str, int, float, None)
    document["writers"] = json.dumps(document["writers"])
    document["languages"] = json.dumps(document["languages"])
    document["genres"] = json.dumps(document["genres"])
    document["cast"] = json.dumps(document["cast"])
    document["directors"] = json.dumps(document["directors"])
    document["countries"] = json.dumps(document["countries"])
    document["imdb"] = json.dumps(document["imdb"])
    document["awards"] = json.dumps(document["awards"])

    # Create a Document object with the text and excluded metadata for llm and embedding models
    llama_document = Document(
        text=document["fullplot"],
        metadata=document,
        excluded_llm_metadata_keys=["fullplot", "metacritic"],
        excluded_embed_metadata_keys=["fullplot", "metacritic", "poster", "num_mflix_comments", "runtime", "rated"],
        metadata_template="{key}=>{value}",
        text_template="Metadata: {metadata_str}\n-----\nContent: {content}",
    )

    llama_documents.append(llama_document)
# Observing an example of what the LLM and Embedding model receive as input
print(
"\nThe LLM sees this: \n",
llama_documents[0].get_content(metadata_mode=MetadataMode.LLM),
)
print(
"\nThe Embedding model sees this: \n",
llama_documents[0].get_content(metadata_mode=MetadataMode.EMBED),
)
```
The final step of processing before ingesting the data to the MongoDB vector store is to convert the list of LlamaIndex documents into another first-class citizen data structure known as nodes. Once we have the nodes generated from the documents, the next step is to generate embedding data for each node using the content in the text and metadata attributes.
```python
from llama_index.core.node_parser import SentenceSplitter

parser = SentenceSplitter()
nodes = parser.get_nodes_from_documents(llama_documents)

for node in nodes:
    node_embedding = embed_model.get_text_embedding(
        node.get_content(metadata_mode="all")
    )
    node.embedding = node_embedding
```
## Step 4: database setup and connection
Before moving forward, ensure the following prerequisites are met:
- Database cluster setup on MongoDB Atlas
- Obtained the URI to your cluster
For assistance with database cluster setup and obtaining the URI, refer to our guide for setting up a MongoDB cluster, and our guide to get your connection string. Alternatively, follow Step 5 of this article on using embeddings in a RAG system, which offers detailed instructions on configuring and setting up the database cluster.
Once you have successfully created a cluster, create the database and collection within the MongoDB Atlas cluster by clicking **+ Create Database**. The database will be named `movies`, and the collection will be named `movies_records`.
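Connecting from the development environment is then a few lines of PyMongo. The sketch below assumes your Atlas connection string is exposed through a `MONGO_URI` environment variable, and it defines the `mongo_client`, `DB_NAME`, and `COLLECTION_NAME` names reused by the ingestion code in Step 6:

```python
import os
import pymongo

MONGO_URI = os.environ["MONGO_URI"]  # your Atlas connection string

DB_NAME = "movies"
COLLECTION_NAME = "movies_records"

mongo_client = pymongo.MongoClient(MONGO_URI)
collection = mongo_client[DB_NAME][COLLECTION_NAME]

print(mongo_client.admin.command("ping"))  # simple connectivity check
```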
## Step 5: vector search index creation
In the creation of a vector search index using the JSON editor on MongoDB Atlas, ensure your vector search index is named ```vector_index``` and the vector search index definition is as follows:
```json
{
"fields":
{
"numDimensions": 256,
"path": "embedding",
"similarity": "cosine",
"type": "vector"
}
]
}
```
After setting up the vector search index, data can be ingested and retrieved efficiently. Data ingestion is a trivial process achieved with less than three lines when leveraging LlamaIndex.
## Step 6: data ingestion to vector database
Up to this point, we have successfully done the following:
- Loaded data sourced from Hugging Face
- Provided each data point with embedding using the OpenAI embedding model
- Set up a MongoDB database designed to store vector embeddings
- Established a connection to this database from our development environment
- Defined a vector search index for efficient querying of vector embeddings
The code snippet below also initialises a MongoDB Atlas vector store object via the LlamaIndex constructor ```MongoDBAtlasVectorSearch```. It's important to note that in this step, we reference the name of the vector search index previously created via the MongoDB Cloud Atlas interface. For this specific use case, the index name is ```vector_index```.
The crucial method that executes the ingestion of nodes into a specified vector store is the .add() method of the LlamaIndex MongoDB instance.
```python
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch
vector_store = MongoDBAtlasVectorSearch(mongo_client, db_name=DB_NAME, collection_name=COLLECTION_NAME, index_name="vector_index")
vector_store.add(nodes)
```
The next step is to create a LlamaIndex index. Within LlamaIndex, when documents are loaded into any of the index abstraction classes — ```SummaryIndex```, ```TreeIndex```, ```KnowledgeGraphIndex```, and especially ```VectorStoreIndex``` — an index that stores a representation of the original document is built in an in-memory vector store that also stores embeddings.
But since the MongoDB Atlas vector database is utilised in this RAG system to store the embeddings and also the index for our document, LlamaIndex enables the retrieval of the index from Atlas via the ```from_vector_store``` method of the ```VectorStoreIndex``` class.
```python
from llama_index.core import VectorStoreIndex, StorageContext
index = VectorStoreIndex.from_vector_store(vector_store)
```
## Step 7: handling user queries
The next step involves creating a LlamaIndex query engine. The query engine makes it possible to use natural language to retrieve relevant, contextually appropriate information from a data index. The ```as_query_engine``` method provided by LlamaIndex saves AI engineers and developers from writing the implementation code needed to process queries appropriately for extracting information from a data source.
For our use case, the query engine satisfies the requirement of building a question-and-answer application. However, LlamaIndex does provide the ability to construct a chat-like application with the [Chat Engine functionality.
``` python
import pprint
from llama_index.core.response.notebook_utils import display_response
query_engine = index.as_query_engine(similarity_top_k=3)
query = "Recommend a romantic movie suitable for the christmas season and justify your selecton"
response = query_engine.query(query)
display_response(response)
pprint.pprint(response.source_nodes)
```
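As mentioned above, the same index can also power a conversational interface via LlamaIndex's Chat Engine. A minimal sketch (the chat mode shown is just one reasonable choice) reuses the `index` built earlier:

```python
# Reuses the `index` built earlier from the MongoDB Atlas vector store
chat_engine = index.as_chat_engine(chat_mode="condense_question")

print(chat_engine.chat("Which of those movies would suit a family audience?"))
print(chat_engine.chat("Why did you recommend that one?"))  # follow-up uses chat history
```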
----------
## Conclusion
Incorporating RAG architectural design patterns improves LLM performance within modern generative AI applications and introduces a cost-conscious approach to building robust AI infrastructure. As this article demonstrates, building a robust RAG system with minimal code is straightforward when using components such as MongoDB as a vector database and LlamaIndex as the LLM orchestrator.
In particular, this tutorial covered the implementation of a RAG system that leverages the combined capabilities of Python, OpenAI, LlamaIndex, and the MongoDB vector database, also known as the POLM AI stack.
It should be mentioned that fine-tuning is still a viable strategy for improving the capabilities of LLMs and updating their parametric knowledge. However, for AI engineers who consider the economics of building and maintaining GenAI applications, exploring cost-effective methods that improve LLM capabilities is worth considering, even if it is experimental.
The associated cost of data sourcing, the acquisition of hardware accelerators, and the domain expertise needed for fine-tuning LLMs and foundation models often entail significant investment, making exploring more cost-effective methods, such as RAG systems, an attractive alternative.
Notably, the cost implications of fine-tuning and model training underscore the need for AI engineers and developers to adopt a cost-saving mindset from the early stages of an AI project. Most applications today already have, or soon will have, some form of generative AI capability supported by an AI infrastructure. To this point, it becomes a key aspect of an AI engineer's role to communicate and express the value of exploring cost-effective solutions to stakeholders and key decision-makers when developing AI infrastructure.
All code presented in this article is presented on GitHub. Happy hacking.
----------
## FAQ
**Q: What is a retrieval-augmented generation (RAG) system?**
Retrieval-augmented generation (RAG) is a design pattern that improves the capabilities of LLMs by using retrieval models to fetch semantically relevant information from a database. This additional context is combined with the user's query to generate more accurate and relevant responses from LLMs.
**Q: What are the key components of an AI stack in a RAG system?**
The essential components include models (like GPT-3.5, GPT-4, or Llama), orchestrators or integrators for managing interactions between LLMs and data sources, and operational and vector databases for storing and retrieving data efficiently.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2d3edefc63969c9e/65cf3ec38d55b016fb614064/GenAI_Stack_(4).png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte6e94adc39a972d2/65cf3fe80b928c05597cf436/GenAI_Stack_(3).png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt304223ce674c707c/65cf4262e52e7542df43d684/GenAI_Stack_(5).png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt62d948b4a9813c34/65cf442f849f316aeae97372/GenAI_Stack_(6).png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt43cd95259d274718/65cf467b77f34c1fccca337e/GenAI_Stack_(7).png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "This article provides an in-depth tutorial on building a Retrieval-Augmented Generation (RAG) system using the combined capabilities of Python, OpenAI, LlamaIndex, and MongoDB's vector database, collectively referred to as the POLM AI stack.",
"contentType": "Tutorial"
} | How to Build a RAG System With LlamaIndex, OpenAI, and MongoDB Vector Database | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/why-use-mongodb-with-ruby | created | # Why Use MongoDB with Ruby
Before discovering Ruby and Ruby on Rails, I was a .NET developer. At that time, I'd make ad-hoc changes to my development database, export my table/function/stored procedure/view definitions to text files, and check them into source control with any code changes. Using `diff` functionality, I'd compare the schema changes that the DBAs needed to apply to production and we'd script that out separately.
I'm sure better tools existed (and I eventually started using some of RedGate's tools), but I was looking for a change. At that time, the real magic of Ruby on Rails for me was the Active Record Migrations which made working with my database fit with my programming workflow. Schema management became less of a chore and there were `rake` tasks for anything I needed (applying migrations, rolling back changes, seeding a test database).
Schema versioning and management with Rails was leaps and bounds better than what I was used to, and I didn't think this could get any better — but then I found MongoDB.
When working with MongoDB, there's no need to `CREATE TABLE foo (id integer, bar varchar(255), ...)`; if a collection (or associated database) doesn't exist, inserting a new document will automatically create it for you. This means Active Record migrations are no longer needed as this level of schema change management was no longer necessary.
Having the flexibility to define my data model directly within the code without needing to resort to the intermediary management that Active Record had facilitated just sort of made sense to me. I could now persist object state to my database directly, embed related model details, and easily form queries around these structures to quickly retrieve my data.
## Flexible schema
Data in MongoDB has a flexible schema as collections do not enforce a strict document structure or schema by default. This flexibility gives you data-modeling choices to match your application and its performance requirements, which aligns perfectly with Ruby's focus on simplicity and productivity.
## Let's try it out
We can demonstrate how to quickly get started with the MongoDB Ruby Driver using the following simple Ruby script that will connect to a cluster, insert a document, and read it back:
```ruby
require 'bundler/inline'
gemfile do
source 'https://rubygems.org'
gem 'mongo'
end
client = Mongo::Client.new('mongodb+srv://username:password@mycluster.mongodb.net/test')
collection = client[:foo]
collection.insert_one({ bar: "baz" })
puts collection.find.first
# => {"_id"=>BSON::ObjectId('62d83d9dceb023b20aff228a'), "bar"=>"baz"}
```
When the document above is inserted, an `_id` value of `BSON::ObjectId('62d83d9dceb023b20aff228a')` is created. All documents must have an `_id` field. However, if not provided, a default `_id` of type `ObjectId` will be generated. When running the above, you will get a different value for `_id`, or you may choose to explicitly set it to any value you like!
Feel free to give the above example a spin using your existing MongoDB cluster or MongoDB Atlas cluster. If you don't have a MongoDB Atlas cluster, sign up for an always free tier cluster to get started.
## Installation
The MongoDB Ruby Driver is hosted at RubyGems, or if you'd like to explore the source code, it can be found on GitHub.
To simplify the example above, we used `bundler/inline` to provide a single-file solution using Bundler. However, the `mongo` gem can be just as easily added to an existing `Gemfile` or installed via `gem install mongo`.
## Basic CRUD operations
Our sample above demonstrated how to quickly create and read a document. Updating and deleting documents are just as painless as shown below:
```ruby
# set a new field 'counter' to 1
collection.update_one({ _id: BSON::ObjectId('62d83d9dceb023b20aff228a')}, :"$set" => { counter: 1 })
puts collection.find.first
# => {"_id"=>BSON::ObjectId('62d83d9dceb023b20aff228a'), "bar"=>"baz", "counter"=>1}
# increment the field 'counter' by one
collection.update_one({ _id: BSON::ObjectId('62d83d9dceb023b20aff228a')}, :"$inc" => { counter: 1 })
puts collection.find.first
# => {"_id"=>BSON::ObjectId('62d83d9dceb023b20aff228a'), "bar"=>"baz", "counter"=>2}
# remove the test document
collection.delete_one({ _id: BSON::ObjectId('62d83d9dceb023b20aff228a') })
```
## Object document mapper
Though all interaction with your Atlas cluster can be done directly using the MongoDB Ruby Driver, most developers prefer a layer of abstraction such as an ORM or ODM. Ruby developers can use the Mongoid ODM to easily model MongoDB collections in their code and simplify interaction using a fluid API akin to Active Record's Query Interface.
The following example adapts the previous example to use Mongoid:
```ruby
require 'bundler/inline'
gemfile do
source 'https://rubygems.org'
gem 'mongoid'
end
Mongoid.configure do |config|
config.clients.default = { uri: "mongodb+srv://username:password@mycluster.mongodb.net/test" }
end
class Foo
include Mongoid::Document
field :bar, type: String
field :counter, type: Integer, default: 1
end
# create a new instance of 'Foo', which will assign a default value of 1 to the 'counter' field
foo = Foo.create bar: "baz"
puts foo.inspect
# => #<Foo _id: ..., bar: "baz", counter: 1>
# interact with the instance variable 'foo' and modify fields programmatically
foo.counter += 1
# save the instance of the model, persisting changes back to MongoDB
foo.save!
puts foo.inspect
# => #<Foo _id: ..., bar: "baz", counter: 2>
```
## Summary
Whether you're using Ruby/Rails to build a script/automation tool, a new web application, or even the next Coinbase, MongoDB has you covered with both a Driver that simplifies interaction with your data and an ODM that seamlessly integrates your data model with your application code.
## Conclusion
Interacting with your MongoDB data via Ruby — either using the Driver or the ODM — is straightforward, but you can also directly interface with your data from MongoDB Atlas using the built-in Data Explorer. Depending on your preferences, though, there are options:
* MongoDB for Visual Studio Code allows you to connect to your MongoDB instance and enables you to interact in a way that fits into your native workflow and development tools. You can navigate and browse your MongoDB databases and collections, and prototype queries and aggregations for use in your applications.
* MongoDB Compass is an interactive tool for querying, optimizing, and analyzing your MongoDB data. Get key insights, drag and drop to build pipelines, and more.
* Studio 3T is an extremely easy-to-use third-party GUI for interacting with your MongoDB data.
* MongoDB Atlas Data API lets you read and write data in Atlas with standard HTTPS requests. To use the Data API, all you need is an HTTPS client and a valid API key.
Ruby was recently added as a language export option to both MongoDB Compass and the MongoDB VS Code Extension. Using this integration you can easily convert an aggregation pipeline from either tool into code you can copy/paste into your Ruby application. | md | {
"tags": [
"MongoDB",
"Ruby"
],
"pageDescription": "Find out what makes MongoDB a great fit for your next Ruby on Rails application! ",
"contentType": "Article"
} | Why Use MongoDB with Ruby | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-rivet-graph-ai-integ | created | # Building AI Graphs with Rivet and MongoDB Atlas Vector Search to Power AI Applications
## Introduction
In the rapidly advancing realm of database technology and artificial intelligence, the convergence of intuitive graphical interfaces and powerful data processing tools has created a new horizon for developers and data scientists. MongoDB Compass, with its rich features and user-friendly design, stands out as a flagship database management tool. The integration of AI capabilities, such as those provided by Rivet AI's graph builder, pushes the envelope further, offering unprecedented ease and efficiency in managing and analyzing data.
This article delves into the synergy between MongoDB Atlas, a database as a service, and Rivet AI's graph builder, exploring how this integration facilitates the visualization and manipulation of data. Rivet is a powerful tool developed by Ironclad, a MongoDB partner that, together with MongoDB, aims to make AI flows as easy and intuitive as possible.
We will dissect the high-level architecture that allows users to interact with their database in a more dynamic and insightful manner, thereby enhancing their ability to make data-driven decisions swiftly.
Make sure to visit our GitHub repository for sample code and a test run of this solution.
## High-level architecture
The high-level architecture of the MongoDB Atlas and Rivet AI graph builder integration is centered around a seamless workflow that caters to both the extraction of data and its subsequent analysis using AI-driven insights.
----------
**Data extraction and structuring**: At the core of the workflow is the ability to extract and structure data within the MongoDB Atlas database. Users can define and manipulate documents and collections, leveraging MongoDB's flexible schema model. The MongoDB Compass interface allows for real-time querying and indexing, making the retrieval of specific data subsets both intuitive and efficient.
**AI-enhanced analysis**: Once the data is structured, Rivet AI’s graph builder comes into play. It provides a visual representation of operations such as object path extraction, which is crucial for understanding the relationships within the data. The graph builder enables the construction of complex queries and data transformations without the need to write extensive code.
**Vectorization and indexing**: A standout feature is the ability to transform textual or categorical data into vector form using AI, commonly referred to as embedding. These embeddings capture the semantic relationships between data points and are stored back in MongoDB. This vectorization process is pivotal for performing advanced search operations, such as similarity searches and machine learning-based predictions.
**Interactive visualization**: The entire process is visualized interactively through the graph builder interface. Each operation, from matching to embedding extraction and storage, is represented as nodes in a graph, making the data flow and transformation steps transparent and easy to modify.
**Search and retrieval**: With AI-generated vectors stored in MongoDB, users can perform sophisticated search queries. Using techniques like k-nearest neighbors (k-NN), the system can retrieve documents that are semantically close to a given query, which is invaluable for recommendation systems, search engines, and other AI-driven applications.
----------
## Installation steps
**Install Rivet**: To begin using Rivet, visit the official Rivet installation page and follow the instructions to download and install the Rivet application on your system.
**Obtain an OpenAI API key**: Rivet requires an OpenAI API key to access certain AI features. Register for an OpenAI account if you haven't already, and navigate to the API section to generate your key.
**Configure Rivet with OpenAI**: After installing Rivet, open the application and navigate to the settings. Enter your OpenAI API key in the OpenAI settings section. This will allow you to use OpenAI's features within Rivet.
**Install the MongoDB plugin in Rivet**: Within Rivet, go to the plugins section and search for the MongoDB plugin. Install the plugin to enable MongoDB functionality within Rivet. This will involve entering your MongoDB Atlas connection string to connect to your database.
**Connect Rivet to MongoDB Atlas**: Once your Atlas Search index is configured, return to Rivet and use the MongoDB plugin to connect to your MongoDB Atlas cluster by providing the necessary connection string and credentials.
Get your Atlas cluster connection string and place it under "Settings" => "Plugins":
## Setup steps
**Set up MongoDB Atlas Search**: Log in to your MongoDB Atlas account and select the cluster where your collection resides. Use MongoDB Compass to connect to your cluster and navigate to the collection you want to index.
**Create a search index in Compass**: In Compass, click on the "Indexes" tab within your collection view. Create a new search index by selecting the "Create Index" option. Choose the fields you want to index, and configure the index options according to your search requirements.
Example:
```json
{
"name": "default",
"type": "vectorSearch",
"fields":
{
"type": "vector",
"path": "embedding",
"numDimensions": 1536,
"similarity": "dotProduct"
}]
}
```
**Build and execute queries**: With the setup complete, you can now build queries in Rivet to retrieve and manipulate data in your MongoDB Atlas collection using the search index you created.
By following these steps, you'll be able to harness the power of MongoDB Atlas Search with the advanced AI capabilities provided by Rivet. Make sure to refer to the official documentation for detailed instructions and troubleshooting tips.
## Simple example of storing and retrieving graph data
### Storing data
In this example, we have a basic Rivet graph that processes data to be stored in a MongoDB database using the `rivet-plugin-mongodb`. The graph follows these steps:
*Store embeddings and documents using Rivet.*
**Extract object path**: The graph starts with an object containing product information — for example, { "product": "shirt", "color": "green" }. This data is then passed to a node that extracts specific information based on the object path, such as $.color, to be used in further processing.
**Get embedding**: The next node in the graph, labeled 'GET EMBEDDING', uses the OpenAI service to generate an embedding vector from the input data. This embedding represents the extracted feature (in this case, the color attribute) in a numerical form that can be used for machine learning or similarity searches.
**Store vector in MongoDB**: The resulting embedding vector is then sent to the 'STORE VECTOR IN MONGODB' node. This node is configured with the database name search and collection products, where it stores the embedding in a field named embedding. The operation completes successfully, as indicated by the 'COMPLETE' status.
**In MongoDB Compass**, we see the following actions and configurations:
**Index creation**: Under the search.products index, a new index is created for the embedding field. This index is configured for vector searches, with 1536 dimensions and using the `DotProduct` similarity measure. This index is of the type “knnVector,” which is suitable for k-nearest neighbors searches.
**Atlas Search index**: The bottom right corner of the screenshot shows the MongoDB Compass interface for editing the “default” index. The provided JSON configuration sets up the index for Atlas Search, with dynamic field mappings.
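The configuration shown in that screenshot would look roughly like the following legacy `knnVector` mapping (a sketch reconstructed from the dimensions and similarity described above, so your own definition may differ):

```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "dotProduct"
      }
    }
  }
}
```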
With this graph and MongoDB set up, the Rivet application is capable of storing vector data in MongoDB and performing efficient vector searches using MongoDB's Atlas Search feature. This allows users to quickly retrieve documents based on the similarity of vector data, such as finding products with similar characteristics.
### Retrieving data
In this Rivet graph setup, we see the process of creating an embedding from textual input and using it to perform a vector search within a MongoDB database:
**Text input**: The graph starts with a text node containing the word "forest." This input could represent a search term or a feature of interest.
**Get embedding**: The 'GET EMBEDDING' node uses OpenAI's service to convert the text input into a numerical vector. This vector has a length of 1536, indicating the dimensionality of the embedding space.
**Search MongoDB for closest vectors with KNN**: With the embedding vector obtained, the graph then uses a node labeled “SEARCH MONGODB FOR CLOSEST VECTORS WITH KNN.” This node is configured with the following parameters:
```
Database: search
Collection: products
Path: embedding
K: 1
```
This configuration indicates that the node will perform a k-nearest neighbor search to find the single closest vector within the products collection of the search database, comparing against the embedding field of the documents stored there.
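Conceptually, the query this node runs is similar to the following Atlas `$vectorSearch` aggregation. This is only a sketch of the idea (the plugin's actual pipeline may differ), reusing the `default` index, `search` database, and `products` collection named above; the `embedding` argument stands in for the vector produced by the 'GET EMBEDDING' node:

```javascript
const { MongoClient } = require("mongodb");

// Sketch only: find the single closest document to a query embedding.
async function findClosestVector(embedding) {
  const client = new MongoClient(process.env.MONGODB_URI);
  try {
    await client.connect();
    const products = client.db("search").collection("products");
    return await products
      .aggregate([
        {
          $vectorSearch: {
            index: "default",       // the Atlas Vector Search index created earlier
            path: "embedding",      // field that stores the vectors
            queryVector: embedding, // 1536-dimension vector from the GET EMBEDDING node
            numCandidates: 100,     // neighbors considered before picking the top K
            limit: 1                // K = 1: return only the closest match
          }
        },
        { $addFields: { score: { $meta: "vectorSearchScore" } } }
      ])
      .toArray();
  } finally {
    await client.close();
  }
}
```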
The matched documents store different colors and their associated embeddings. Each document contains an embedding array, which is compared against the input vector to find the closest match based on the chosen similarity measure (not shown in the image).
### Complex graph workflow for an enhanced grocery shopping experience using MongoDB and embeddings
This section delves into a sophisticated workflow that leverages Rivet's graph processing capabilities, MongoDB's robust searching features, and the power of machine learning embeddings. To facilitate that, we have used a workflow demonstrated in another tutorial: AI Shop with MongoDB Atlas. Through this workflow, we aim to transform a user's grocery list into a curated selection of products, optimized for relevance and personal preferences. This complex graph workflow not only improves user engagement but also streamlines the path from product discovery to purchase, thus offering an enhanced grocery shopping experience.
### High-level flow overview
**Graph input**: The user provides input, presumably a list of items or recipes they want to purchase.
**Search MongoDB collection**: The graph retrieves the available categories as a bounding box to the engineered prompt.
**Prompt creation**: A prompt is generated based on the user input, possibly to refine the search or interact with the user for more details.
**Chat interaction**: The graph accesses OpenAI chat capabilities to produce an AI-based list of a structured JSON.
**JSON extraction and object path extraction**: The relevant data is extracted from the JSON response of the OpenAI Chat.
**Embedding generation**: The data is then processed to create embeddings, which are high-dimensional representations of the items.
**Union of searches**: These embeddings are used to create a union of $search queries in MongoDB, which allows for a more sophisticated search mechanism that can consider multiple aspects of the items, like similarity in taste, price range, or brand preference.
**Graph output**: The built query is outputted back from the graph.
### Detailed breakdown
**Part 1: Input to MongoDB Search**
The user input is taken and used to query the MongoDB collection directly. A chat system might be involved to refine this query or to interact with the user. The result of the query is then processed to extract relevant information using JSON and object path extraction methods.
**Part 2: Embedding to union of searches**
The extracted object from Part 1 is taken and an embedding is generated using OpenAI's service. This embedding is used to create a more complex MongoDB $search query. The code node likely contains the logic to perform an aggregation query in MongoDB that uses the generated embeddings to find the best matches. The output is then formatted, possibly as a list of grocery items that match the user's initial input, enriched by the embeddings.
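For a rough idea of what such a query can look like, the sketch below builds a union of k-NN `$search` clauses, one per embedded grocery item. This is illustrative only: the real pipeline is produced by the graph's code node (see the AI Shop tutorial) and will differ in structure and operators, and `itemEmbeddings` is assumed to hold the vectors generated in the previous step.

```javascript
// Sketch: build a union of k-NN $search clauses, one per item embedding.
// Assumes `itemEmbeddings` is an array of vectors produced in the embedding step.
function buildUnionSearch(itemEmbeddings, k = 3) {
  const knnStage = (vector) => [
    { $search: { index: "default", knnBeta: { vector, path: "embedding", k } } },
    { $limit: k }
  ];

  const [first, ...rest] = itemEmbeddings;
  return [
    ...knnStage(first),
    ...rest.map((vector) => ({
      $unionWith: { coll: "products", pipeline: knnStage(vector) }
    }))
  ];
}
```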
This graph demonstrates a sophisticated integration of natural language processing, database querying, and machine learning embedding techniques to provide a user with a rich set of search results. It takes simple text input and transforms it into a detailed query that understands the nuances of user preferences and available products. The final output would be a comprehensive and relevant set of grocery items tailored to the user's needs.
## Connect your application to graph logic
This code snippet defines an Express.js route that handles `POST` requests to the endpoint `/aiRivetSearch`. The route's purpose is to provide an AI-enhanced search functionality for a grocery shopping application, utilizing Rivet for graph operations and MongoDB for data retrieval.
```javascript
// Define a new POST endpoint for handling AI-enhanced search with Rivet
app.post('/aiRivetSearch', async (req, res) => {
// Connect to MongoDB using a custom function that handles the connection logic
db = await connectToDb();
// Extract the search query sent in the POST request body
const { query } = req.body;
// Logging the query and environment variables for debugging purposes
console.log(query);
console.log(process.env.GRAPH_ID);
console.log("Before running graph");
// Load the Rivet project graph from the filesystem to use for the search
const project = await loadProjectFromFile('./server/ai_shop.graph.rivet-project');
// Execute the loaded graph with the provided inputs and plugin settings
const response = await runGraph(project, {
graph: process.env.GRAPH_ID,
openAiKey: process.env.OPEN_AI_KEY,
inputs: {
input: {
type: "string",
value: query
}
},
pluginSettings: {
rivetPluginMongodb: {
mongoDBConnectionString: process.env.RIVET_MONGODB_CONNECTION_STRING,
}
}
});
// Parse the MongoDB aggregation pipeline from the graph response
const pipeline = JSON.parse(response.result.value);
// Connect to the 'products' collection in MongoDB and run the aggregation pipeline
const collection = db.collection('products');
const result = await collection.aggregate(pipeline).toArray();
// Send the search results back to the client along with additional context
res.json({
"result": result,
"searchList": response.list.value,
prompt: query,
pipeline: pipeline
});
});
```
Here’s a step-by-step explanation:
Endpoint initialization:
- An asynchronous POST route /aiRivetSearch is set up to handle incoming search queries.
MongoDB connection:
- The server establishes a connection to MongoDB using a custom connectToDb function. This function is presumably defined elsewhere in the codebase and handles the specifics of connecting to the MongoDB instance.
Request handling:
- The server extracts the query variable from the request's body. This query is the text input from the user, which will be used to perform the search.
Logging for debugging:
- The query and relevant environment variables, such as GRAPH_ID (which likely identifies the specific graph to be used within Rivet), are logged to the console. This is useful for debugging purposes, ensuring the server is receiving the correct inputs.
Graph loading and execution:
- The server loads a Rivet project graph from a file in the server's file system.
- Using Rivet's runGraph function, the loaded graph is executed with the provided inputs (the user's query) and plugin settings. The settings include the openAiKey and the MongoDB connection string from environment variables.
Response processing:
- The result of the graph execution is logged, and the server parses the MongoDB aggregation pipeline from the result. The pipeline defines a sequence of data aggregation operations to be performed on the MongoDB collection.
MongoDB aggregation:
- The server connects to the "products" collection within MongoDB.
- It then runs the aggregation pipeline against the collection and waits for the results, converting the cursor returned by the aggregate function to an array with toArray().
Response generation:
- Finally, the server responds to the client's POST request with a JSON object. This object includes the results of the aggregation, the user's original search list, the prompt used for the search, and the aggregation pipeline itself. The inclusion of the prompt and pipeline in the response can be particularly helpful for front-end applications to display the query context or for debugging.
This code combines AI and database querying to create a powerful search tool within an application, giving the user relevant and personalized results based on their input.
This and other sample code can be tested in our GitHub repository.
## Wrap-up: synergizing MongoDB with Rivet for innovative search solutions
The integration of MongoDB with Rivet presents a unique opportunity to build sophisticated search solutions that are both powerful and user-centric. MongoDB's flexible data model and powerful aggregation pipeline, combined with Rivet's ability to process and interpret complex data structures through graph operations, pave the way for creating dynamic, intelligent applications.
By harnessing the strengths of both MongoDB and Rivet, developers can construct advanced search capabilities that not only understand the intent behind user queries but also deliver personalized results efficiently. This synergy allows for the crafting of seamless experiences that can adapt to the evolving needs of users, leveraging the full spectrum of data interactions from input to insight.
As we conclude, it's clear that this fusion of database technology and graph processing can serve as a cornerstone for future software development — enabling the creation of applications that are more intuitive, responsive, and scalable. The potential for innovation in this space is vast, and the continued exploration of this integration will undoubtedly yield new methodologies for data management and user engagement.
Questions? Comments? Join us in the MongoDB Developer Community forum.
| md | {
"tags": [
"Atlas",
"JavaScript",
"AI",
"Node.js"
],
"pageDescription": "Join us in a journey through the convergence of database technology and AI in our article 'Building AI Graphs with Rivet and MongoDB Atlas Vector Search'. This guide offers a deep dive into the integration of Rivet AI's graph builder with MongoDB Atlas, showcasing how to visualize and manipulate data for AI applications. Whether you're a developer or a data scientist, this article provides valuable insights and practical steps for enhancing data-driven decision-making and creating dynamic, AI-powered solutions.",
"contentType": "Tutorial"
} | Building AI Graphs with Rivet and MongoDB Atlas Vector Search to Power AI Applications | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/interactive-rag-mongodb-atlas-function-calling-api | created | # Interactive RAG with MongoDB Atlas + Function Calling API
## Introduction: Unveiling the Power of Interactive Knowledge Discovery
Imagine yourself as a detective investigating a complex case. Traditional retrieval-augmented generation (RAG) acts as your static assistant, meticulously sifting through mountains of evidence based on a pre-defined strategy. While helpful, this approach lacks the flexibility needed for today's ever-changing digital landscape.
Enter interactive RAG – the next generation of information access. It empowers users to become active knowledge investigators by:
* **Dynamically adjusting retrieval strategies:** Tailor the search to your specific needs by fine-tuning parameters like the number of sources, chunk size, and retrieval algorithms.
* **Staying ahead of the curve:** As new information emerges, readily incorporate it into your retrieval strategy to stay up-to-date and relevant.
* **Enhancing LLM performance:** Optimize the LLM's workload by dynamically adjusting the information flow, leading to faster and more accurate analysis.
Before you continue, make sure you understand the basics of:
- LLMs.
- RAG.
- Using a vector database.
## Optimizing your retrieval strategy: static vs. interactive RAG
Choosing between static and interactive retrieval-augmented generation approaches is crucial for optimizing your application's retrieval strategy. Each approach offers unique advantages and disadvantages, tailored to specific use cases:
**Static RAG:** A static RAG approach is pre-trained on a fixed knowledge base, meaning the information it can access and utilize is predetermined and unchanging. This allows for faster inference times and lower computational costs, making it ideal for applications requiring real-time responses, such as chatbots and virtual assistants.
**Pros:**
* **Faster response:** Pre-loaded knowledge bases enable rapid inference, ideal for real-time applications like chatbots and virtual assistants.
* **Lower cost:** Static RAG requires fewer resources for training and maintenance, making it suitable for resource-constrained environments.
* **Controlled content:** Developers have complete control over the model's knowledge base, ensuring targeted and curated responses in sensitive applications.
* **Consistent results:** Static RAG provides stable outputs even when underlying data changes, ensuring reliability in data-intensive scenarios.
**Cons:**
* **Limited knowledge:** Static RAG is confined to its pre-loaded knowledge, limiting its versatility compared to interactive RAG accessing external data.
* **Outdated information:** Static knowledge bases can become outdated, leading to inaccurate or irrelevant responses if not frequently updated.
* **Less adaptable:** Static RAG can struggle to adapt to changing user needs and preferences, limiting its ability to provide personalized or context-aware responses.
**Interactive RAG:** An interactive RAG approach is trained on a dynamic knowledge base, allowing it to access and process real-time information from external sources such as online databases and APIs. This enables it to provide up-to-date and relevant responses, making it suitable for applications requiring access to constantly changing data.
**Pros:**
* **Up-to-date information:** Interactive RAG can access and process real-time external information, ensuring current and relevant responses, which is particularly valuable for applications requiring access to frequently changing data.
* **Greater flexibility:** Interactive RAG can adapt to user needs and preferences by incorporating feedback and interactions into their responses, enabling personalized and context-aware experiences.
* **Vast knowledge base:** Access to external information provides an almost limitless knowledge pool, allowing interactive RAG to address a wider range of queries and deliver comprehensive and informative responses.
**Cons:**
* **Slower response:** Processing external information increases inference time, potentially hindering real-time applications.
* **Higher cost:** Interactive RAG requires more computational resources, making it potentially unsuitable for resource-constrained environments.
* **Bias risk:** External information sources may contain biases or inaccuracies, leading to biased or misleading responses if not carefully mitigated.
* **Security concerns:** Accessing external sources introduces potential data security risks, requiring robust security measures to protect sensitive information.
### Choosing the right approach
While this tutorial focuses specifically on interactive RAG, the optimal approach depends on your application's specific needs and constraints. Consider:
* **Data size and update frequency:** Static models are suitable for static or infrequently changing data, while interactive RAG is necessary for frequently changing data.
* **Real-time requirements:** Choose static RAG for applications requiring fast response times. For less critical applications, interactive RAG may be preferred.
* **Computational resources:** Evaluate your available resources when choosing between static and interactive approaches.
* **Data privacy and security:** Ensure your chosen approach adheres to all relevant data privacy and security regulations.
## Chunking: a hidden hero in the rise of GenAI
Now, let's put our detective hat back on. If you have a mountain of evidence available for a particular case, you wouldn't try to analyze every piece of evidence at once, right? You'd break it down into smaller, more manageable pieces — documents, witness statements, physical objects — and examine each one carefully. In the world of large language models, this process of breaking down information is called _chunking_, and it plays a crucial role in unlocking the full potential of retrieval-augmented generation.
Just like a detective, an LLM can't process a mountain of information all at once. Chunking helps it break down text into smaller, more digestible pieces called _chunks_. Think of these chunks as bite-sized pieces of knowledge that the LLM can easily analyze and understand. This allows the LLM to focus on specific sections of the text, extract relevant information, and generate more accurate and insightful responses.
However, the size of each chunk isn't just about convenience for the LLM; it also significantly impacts the _retrieval vector relevance score_, a key metric in evaluating the effectiveness of chunking strategies. The process involves converting text to vectors, measuring the distance between them, utilizing ANN/KNN algorithms, and calculating a score for the generated vectors.
Here is an example: Imagine asking "What is a mango?" and the LLM dives into its knowledge base, encountering these chunks:
**High scores:**
* **Chunk:** "Mango is a tropical stone fruit with a sweet, juicy flesh and a single pit." (Score: 0.98)
* **Chunk:** "In India, mangoes are revered as the 'King of Fruits' and hold cultural significance." (Score: 0.92)
* **Chunk:** "The mango season brings joy and delicious treats like mango lassi and mango ice cream." (Score: 0.85)
These chunks directly address the question, providing relevant information about the fruit's characteristics, cultural importance, and culinary uses. High scores reflect their direct contribution to answering your query.
**Low scores:**
* **Chunk:** "Volcanoes spew molten lava and ash, causing destruction and reshaping landscapes." (Score: 0.21)
* **Chunk:** "The stock market fluctuates wildly, driven by economic factors and investor sentiment." (Score: 0.42)
* **Chunk:** "Mitochondria, the 'powerhouses of the cell,' generate energy for cellular processes." (Score: 0.55)
These chunks, despite containing interesting information, are completely unrelated to mangoes. They address entirely different topics, earning low scores due to their lack of relevance to the query.
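To make the scoring intuition concrete, here is a minimal sketch of how a relevance score between a query vector and a chunk vector can be computed with cosine similarity. The numbers are made up and the vectors are tiny; in practice, Atlas computes these scores for you against full-size embeddings via its ANN index:

```python
import math

def cosine_similarity(a, b):
    # Score close to 1.0 => semantically similar, close to 0 => unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real models produce hundreds or thousands of dimensions).
query_vector = [0.9, 0.1, 0.0]   # "What is a mango?"
mango_chunk = [0.8, 0.2, 0.1]    # "Mango is a tropical stone fruit..."
volcano_chunk = [0.1, 0.2, 0.9]  # "Volcanoes spew molten lava..."

print(cosine_similarity(query_vector, mango_chunk))    # high score
print(cosine_similarity(query_vector, volcano_chunk))  # low score
```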
Check out ChunkViz v0.1 to get a feel for how chunk size (character length) breaks down text.
This is where MongoDB Atlas stands out for GenAI applications. Imagine MongoDB as a delicious cake you can both bake and eat. Not only does it offer the familiar features of MongoDB, but it also lets you store and perform mathematical operations on your vector embeddings directly within the platform. This eliminates the need for separate tools and streamlines the entire process.
By leveraging the combined power of function calling API and MongoDB Atlas, you can streamline your content ingestion process and unlock the full potential of vector embeddings for your GenAI applications.
1. **Chunk ingestion**: Content is split into chunks and converted into vector embeddings using an embedding model such as GPT4All, OpenAI, or Hugging Face.
```python
# Chunk Ingest Strategy
self.text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size=4000, # THIS CHUNK SIZE IS FIXED - INGEST CHUNK SIZE DOES NOT CHANGE
chunk_overlap=200, # CHUNK OVERLAP IS FIXED
length_function=len,
add_start_index=True,
)
# load data from webpages using Playwright. One document will be created for each webpage
# split the documents using a text splitter to create "chunks"
loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])
documents = loader.load_and_split(self.text_splitter)
self.index.add_documents(
documents
)
```
2. **Vector index**: When employing vector search, it's necessary to create a search index. This process entails setting up the vector path, aligning the dimensions with your chosen model, and selecting a vector function for searching the top K-nearest neighbors.
```python
{
"name": "",
"type": "vectorSearch",
"fields":
{
"type": "vector",
"path": ,
"numDimensions": ,
"similarity": "euclidean | cosine | dotProduct"
},
...
]
}
```
3. **Chunk retrieval**: Once the vector embeddings are indexed, an aggregation pipeline can be created on your embedded vector data to execute queries and retrieve results. This is accomplished using the $vectorSearch operator, a new aggregation stage in Atlas.
```python
def recall(self, text, n_docs=2, min_rel_score=0.25, chunk_max_length=800,unique=True):
#$vectorSearch
print("recall=>"+str(text))
response = self.collection.aggregate([
{
"$vectorSearch": {
"index": "default",
"queryVector": self.gpt4all_embd.embed_query(text), #GPT4AllEmbeddings()
"path": "embedding",
#"filter": {},
"limit": 15, #Number (of type int only) of documents to return in the results. Value can't exceed the value of numCandidates.
"numCandidates": 50 #Number of nearest neighbors to use during the search. You can't specify a number less than the number of documents to return (limit).
}
},
{
"$addFields":
{
"score": {
"$meta": "vectorSearchScore"
}
}
},
{
"$match": {
"score": {
"$gte": min_rel_score
}
}
},{"$project":{"score":1,"_id":0, "source":1, "text":1}}])
tmp_docs = []
str_response = []
for d in response:
if len(tmp_docs) == n_docs:
break
if unique and d["source"] in tmp_docs:
continue
tmp_docs.append(d["source"])
str_response.append({"URL":d["source"],"content":d["text"][:chunk_max_length],"score":d["score"]})
kb_output = f"Knowledgebase Results[{len(tmp_docs)}]:\n```{str(str_response)}```\n## \n```SOURCES: "+str(tmp_docs)+"```\n\n"
self.st.write(kb_output)
return str(kb_output)
```
In this tutorial, we will mainly be focusing on the **CHUNK RETRIEVAL** strategy using the function calling API of LLMs and MongoDB Atlas as our **data platform**.
## Key features of MongoDB Atlas
MongoDB Atlas offers a robust vector search platform with several key features, including:
1. **$vectorSearch operator:**
This powerful aggregation pipeline operator allows you to search for documents based on their vector embeddings. You can specify the index to search, the query vector, and the similarity metric to use. $vectorSearch provides efficient and scalable search capabilities for vector data.
2. **Flexible filtering:**
You can combine $vectorSearch with other aggregation pipeline operators like $match, $sort, and $limit to filter and refine your search results. This allows you to find the most relevant documents based on both their vector embeddings and other criteria.
3. **Support for various similarity metrics:**
MongoDB Atlas supports different similarity metrics like cosine similarity and euclidean distance, allowing you to choose the best measure for your specific data and task.
4. **High performance:**
The vector search engine in MongoDB Atlas is optimized for large datasets and high query volumes, ensuring efficient and responsive search experiences.
5. **Scalability:**
MongoDB Atlas scales seamlessly to meet your growing needs, allowing you to handle increasing data volumes and query workloads effectively.
**Additionally, MongoDB Atlas offers several features relevant to its platform capabilities:**
* **Global availability:**
Your data is stored in multiple data centers around the world, ensuring high availability and disaster recovery.
* **Security:**
MongoDB Atlas provides robust security features, including encryption at rest and in transit, access control, and data audit logging.
* **Monitoring and alerting:**
MongoDB Atlas provides comprehensive monitoring and alerting features to help you track your cluster's performance and identify potential issues.
* **Developer tools:**
MongoDB Atlas offers various developer tools and APIs to simplify development and integration with your applications.
## OpenAI function calling:
OpenAI's function calling is a powerful capability that enables users to seamlessly interact with OpenAI models, such as GPT-3.5, through programmable commands. This functionality allows developers and enthusiasts to harness the language model's vast knowledge and natural language understanding by incorporating it directly into their applications or scripts. Through function calling, users can make specific requests to the model, providing input parameters and receiving tailored responses. This not only facilitates more precise and targeted interactions but also opens up a world of possibilities for creating dynamic, context-aware applications that leverage the extensive linguistic capabilities of OpenAI's models. Whether for content generation, language translation, or problem-solving, OpenAI function calling offers a flexible and efficient way to integrate cutting-edge language processing into various domains.
## Key features of OpenAI function calling:
- Function calling allows you to connect large language models to external tools.
- The Chat Completions API generates JSON that can be used to call functions in your code.
- The latest models have been trained to detect when a function should be called and respond with JSON that adheres to the function signature.
- Building user confirmation flows is recommended before taking actions that impact the world on behalf of users.
- Function calling can be used to create assistants that answer questions by calling external APIs, convert natural language into API calls, and extract structured data from text.
- The basic sequence of steps for function calling involves calling the model, parsing the JSON response, calling the function with the provided arguments, and summarizing the results back to the user.
- Function calling is supported by specific model versions, including GPT-4 and GPT-3.5-turbo.
- Parallel function calling allows multiple function calls to be performed together, reducing round-trips with the API.
- Tokens are used to inject functions into the system message and count against the model's context limit and billing.
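To make that sequence concrete, here is a minimal sketch using the OpenAI Python SDK (v1.x). The `recall` function and its schema are placeholders for whatever retrieval action your agent exposes, not part of the OpenAI API itself:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Describe the action the model is allowed to call. "recall" is a placeholder
# for your own retrieval action (e.g. a vector search against Atlas).
tools = [
    {
        "type": "function",
        "function": {
            "name": "recall",
            "description": "Search the knowledge base for chunks relevant to the question",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The search query"}
                },
                "required": ["text"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is a mango?"}],
    tools=tools,
    tool_choice="auto",
)

tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    args = json.loads(tool_calls[0].function.arguments)
    # Run your own recall(args["text"]) here, then send the result back to the model.
```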
## Function calling API basics: actions
Actions are functions that an agent can invoke. There are two important design considerations around actions:
* Giving the agent access to the right actions
* Describing the actions in a way that is most helpful to the agent
## Crafting actions for effective agents
**Actions are the lifeblood of an agent's decision-making.** They define the options available to the agent and shape its interactions with the environment. Consequently, designing effective actions is crucial for building successful agents.
Two key considerations guide this design process:
1. **Access to relevant actions:** Ensure the agent has access to actions necessary to achieve its objectives. Omitting critical actions limits the agent's capabilities and hinders its performance.
2. **Action description clarity:** Describe actions in a way that is informative and unambiguous for the agent. Vague or incomplete descriptions can lead to misinterpretations and suboptimal decisions.
By carefully designing actions that are both accessible and well-defined, you equip your agent with the tools and knowledge necessary to navigate its environment and achieve its objectives.
Further considerations:
* **Granularity of actions:** Should actions be high-level or low-level? High-level actions offer greater flexibility but require more decision-making, while low-level actions offer more control but limit adaptability.
* **Action preconditions and effects:** Clearly define the conditions under which an action can be taken and its potential consequences. This helps the agent understand the implications of its choices.
If you don't give the agent the right actions and describe them in an effective way, you won’t be able to build a working agent.
In each pass of the agent loop, the LLM is called, resulting in either a response to the user or action(s) to be taken. If it is determined that a response is required, then that is passed to the user, and that cycle is finished. If it is determined that an action is required, that action is then taken, and an observation (action result) is made. That action and corresponding observation are added back to the prompt (we call this an “agent scratchpad”), and the loop resets — i.e., the LLM is called again (with the updated agent scratchpad).
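A stripped-down version of that loop looks something like this (pseudocode-style Python, not the demo app's actual implementation):

```python
# `llm` decides whether to answer or act; `tools` maps action names to callables.
def run_agent(llm, tools, user_input, max_turns=5):
    scratchpad = []                                           # action/observation history
    for _ in range(max_turns):
        decision = llm(user_input, scratchpad)                # respond, or pick an action
        if decision["type"] == "response":
            return decision["content"]                        # final answer for the user
        action = tools[decision["action"]]                    # e.g. "recall", "search_web", "read_url"
        observation = action(**decision["arguments"])         # take the action
        scratchpad.append((decision["action"], observation))  # feed the result back in
    return "Unable to finish within the allowed number of steps."
```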
## Getting started
Clone the demo GitHub repository.
```bash
git clone git@github.com:ranfysvalle02/Interactive-RAG.git
```
Create a new Python environment.
```bash
python3 -m venv env
```
Activate the new Python environment.
```bash
source env/bin/activate
```
Install the requirements.
```bash
pip3 install -r requirements.txt
```
Set the parameters in params.py:
```python
# MongoDB
MONGODB_URI = ""
DATABASE_NAME = "genai"
COLLECTION_NAME = "rag"
# If using OpenAI
OPENAI_API_KEY = ""
# If using Azure OpenAI
#OPENAI_TYPE = "azure"
#OPENAI_API_VERSION = "2023-10-01-preview"
#OPENAI_AZURE_ENDPOINT = "https://.openai.azure.com/"
#OPENAI_AZURE_DEPLOYMENT = ""
```
Create a Search index with the following definition:
```JSON
{
"type": "vectorSearch",
"fields":
{
"numDimensions": 384,
"path": "embedding",
"similarity": "cosine",
"type": "vector"
}
]
}
```
Set the environment.
```bash
export OPENAI_API_KEY=
```
To run the RAG application:
```bash
env/bin/streamlit run rag/app.py
```
Log information generated by the application will be appended to app.log.
## Usage
This bot supports the following actions: answering questions, searching the web, reading URLs, removing sources, listing all sources, viewing messages, and resetting messages.
It also supports an action called iRAG that lets you dynamically control your agent's RAG strategy.
Ex: "set RAG config to 3 sources and chunk size 1250" => New RAG config:{'num_sources': 3, 'source_chunk_size': 1250, 'min_rel_score': 0, 'unique': True}.
If the bot is unable to provide an answer to the question from data stored in the Atlas Vector store and your RAG strategy (number of sources, chunk size, min_rel_score, etc), it will initiate a web search to find relevant information. You can then instruct the bot to read and learn from those results.
## Demo
Let's start by asking our agent a question — in this case, "What is a mango?" The first thing that will happen is it will try to "recall" any relevant information using vector embedding similarity. It will then formulate a response with the content it "recalled" or will perform a web search. Since our knowledge base is currently empty, we need to add some sources before it can formulate a response.
![DEMO - Ask a Question][7]
Since the bot is unable to provide an answer using the content in the vector database, it initiated a Google search to find relevant information. We can now tell it which sources it should "learn." In this case, we'll tell it to learn the first two sources from the search results.
![DEMO - Add a source][8]
## Change RAG strategy
Next, let's modify the RAG strategy! Let's make it only use one source and have it use a small chunk size of 500 characters.
![DEMO - Change RAG strategy part 1][9]
Notice that though it was able to retrieve a chunk with a fairly high relevance score, it was not able to generate a response because the chunk size was too small and the chunk content was not relevant enough to formulate a response. Since it could not generate a response with the small chunk, it performed a web search on the user's behalf.
Let's see what happens if we increase the chunk size to 3,000 characters instead of 500.
![DEMO - Change RAG strategy part 2][10]
Now, with a larger chunk size, it was able to accurately formulate the response using the knowledge from the vector database!
## List all sources
Let's see what's available in the knowledge base of the agent by asking it, “What sources do you have in your knowledge base?”
![DEMO - List all sources][11]
## Remove a source of information
If you want to remove a specific resource, you could do something like:
```
USER: remove source 'https://www.oracle.com' from the knowledge base
```
To remove all the sources in the collection, we could do something like:
![DEMO - Remove ALL sources][12]
This demo has provided a glimpse into the inner workings of our AI agent, showcasing its ability to learn and respond to user queries in an interactive manner. We've witnessed how it seamlessly combines its internal knowledge base with real-time web search to deliver comprehensive and accurate information. The potential of this technology is vast, extending far beyond simple question-answering. None of this would be possible without the magic of the function calling API.
## Embracing the future of information access with interactive RAG
This post has explored the exciting potential of interactive retrieval-augmented generation (RAG) with the powerful combination of MongoDB Atlas and the function calling API. We've delved into the crucial role of chunking, embedding, and the retrieval vector relevance score in optimizing RAG performance, unlocking its true potential for information retrieval and knowledge management.
Interactive RAG, powered by the combined forces of MongoDB Atlas and function calling API, represents a significant leap forward in the realm of information retrieval and knowledge management. By enabling dynamic adjustment of the RAG strategy and seamless integration with external tools, it empowers users to harness the full potential of LLMs for a truly interactive and personalized experience.
Intrigued by the possibilities? Explore the full source code for the interactive RAG application and unleash the power of RAG with MongoDB Atlas and function calling API in your own projects!
Together, let's unlock the transformative potential of this potent combination and forge a future where information is effortlessly accessible and knowledge is readily available to all.
View the full source code for the interactive RAG application using MongoDB Atlas and the function calling API.
### Additional MongoDB Resources
- RAG with Atlas Vector Search, LangChain, and OpenAI
- Taking RAG to Production with the MongoDB Documentation AI Chatbot
- What is Artificial Intelligence (AI)?
- Unlock the Power of Semantic Search with MongoDB Atlas Vector Search
- Machine Learning in Healthcare: Real-World Use Cases and What You Need to Get Started
- What is Generative AI?
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1c80f212af2260c7/6584ad159fa6cfce2b287389/interactive-rag-1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt56fc9b71e3531a49/6584ad51a8ee4354d2198048/interactive-rag-2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte74948e1721bdaec/6584ad51dc76626b2c7e977f/interactive-rag-3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd026c60b753c27e3/6584ad51b0fbcbe79962669b/interactive-rag-4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8e9d94c7162ff93e/6584ad501f8952b2ab911de9/interactive-rag-5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta75a002d93bb01e6/6584ad50c4b62033affb624e/interactive-rag-6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltae07f2c87cf53157/6584ad50b782f0967d583f29/interactive-rag-7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5c2eb90c4f462888/6584ad503ea3616a585750cd/interactive-rag-8.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt227dec581a8ec159/6584ad50bb2e10e5fb00f92d/interactive-rag-9.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd451cf7915c08958/6584ad503ea36155675750c9/interactive-rag-10.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt18aff0657fb9b496/6584ad509fa6cf3cca28738e/interactive-rag-11.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcd723718e3fb583f/6584ad4f0543c5e8fe8f0ef6/interactive-rag-12.png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "Explore the cutting-edge of knowledge discovery with Interactive Retrieval-Augmented Generation (RAG) using MongoDB Atlas and Function Calling API. Learn how dynamic retrieval strategies, enhanced LLM performance, and real-time data integration can revolutionize your digital investigations. Dive into practical examples, benefits, and the future of interactive RAG in our in-depth guide. Perfect for developers and AI enthusiasts seeking to leverage advanced information access and management techniques.",
"contentType": "Tutorial"
} | Interactive RAG with MongoDB Atlas + Function Calling API | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-data-federation-azure | created | # Atlas Data Federation with Azure Blob Storage
For as long as you have been reviewing restaurants, you've been storing your data in MongoDB. The plethora of data you've gathered is so substantial, you decide to team up with your friends to host this data online, so other restaurant goers can decide where to eat, informed by your detailed insights. But your friend has been storing their data in Azure Blob storage. They use JSON now, but they have reviews upon reviews stored as `.csv` files. How can we get all this data pooled together without the often arduous process of migrating databases or transforming data? With MongoDB's Data Federation, you can combine all your data into one unified view, allowing you to easily search for the best French diner in your borough.
This tutorial will walk you through the steps of combining your MongoDB database with your Azure Blob storage, utilizing MongoDB's Data Federation.
## Prerequisites
Before you begin, you'll need a few prerequisites to follow along with this tutorial, including:
- A MongoDB Atlas account, if you don't have one already
- A Microsoft Azure account with a storage account and container setup. If you don't have this, follow the steps in the Microsoft documentation for the storage account and the container.
- Azure CLI, or you can install Azure PowerShell, but this tutorial uses Azure CLI. Sign in and configure your command line tool following the steps in the documentation for Azure CLI and Azure PowerShell.
- Node.js 18 or higher and npm: Make sure you have Node.js and npm (Node.js package manager) installed. Node.js is the runtime environment required to run your JavaScript code server-side. npm is used to manage the dependencies.
### Add your sample data
To have something to view when your data stores are connected, let's add some reviews to your blob. First, you'll add a review for a new restaurant you just reviewed in Manhattan. Create a file called example1.json, and copy in the following:
```json
{
"address":{
"building":"518",
"coord":
{
"$numberDouble":"-74.006220"
},
{
"$numberDouble":"40.733740"
}
],
"street":"Hudson Street",
"zipcode":"10014"
},
"borough":"Manhattan",
"cuisine": [
"French",
"Filipino"
],
"grades":[
{
"date":{
"$date":{
"$numberLong":"1705403605904"
}
},
"grade":"A",
"score":{
"$numberInt":"12"
}
}
],
"name":"Justine's on Hudson",
"restaurant_id":"40356020"
}
```
Upload this file as a blob to your container:
```bash
az storage blob upload --account-name <StorageAccountName> --container-name <ContainerName> --name <BlobName> --file <PathToFile>
```
Here, `BlobName` is the name you want to assign to your blob (just use the same name as the file), and `PathToFile` is the path to the file you want to upload (example1.json).
But you're not just restricted to JSON in your federated database. You're going to create another file, called example2.csv. Copy the following data into the file:
```csv
Restaurant ID,Name,Cuisine,Address,Borough,Latitude,Longitude,Grade Date,Grade,Score
40356030,Sardi's,Continental,"234 W 44th St, 10036",Manhattan,40.757800,-73.987500,1927-09-09,A,11
```
Load example2.csv to your blob using the same command as above.
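In other words, repeat the upload with the new file name, substituting your own account and container names:

```bash
az storage blob upload --account-name <StorageAccountName> --container-name <ContainerName> --name example2.csv --file example2.csv
```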
You can list the blobs in your container to verify that your file was uploaded:
```bash
az storage blob list --account-name <StorageAccountName> --container-name <ContainerName> --output table
```
## Connect your databases using Data Federation
The first step is getting your MongoDB cluster set up. For this tutorial, you're going to create a free M0 cluster. Once this is created, click "Load Sample Dataset." In the sample dataset, you'll see a database called `sample_restaurants` with a collection called `restaurants`, containing thousands of restaurants with reviews. This is the collection you'll focus on.
Now that you have your Azure Storage and MongoDB cluster setup, you are ready to deploy your federated database instance.
1. Select "Data Federation" from the left-hand navigation menu.
2. Click "Create New Federated Database" and, from the dropdown, select "Set up manually."
3. Choose Azure as your cloud provider and give your federated database instance a name.
You can learn more in the MongoDB Developer Center, where you'll find a whole variety of tutorials, or explore MongoDB with other languages.
Before you start, make sure you have Node.js installed in your environment.
1. Set up a new Node.js project:
- Create a new directory for your project.
- Initialize a new Node.js project by running `npm init -y` in your terminal within that directory.
- Install the MongoDB Node.js driver by running `npm install mongodb`.
2. Create a JavaScript file:
- Create a file named searchApp.js in your project directory.
3. Implement the application:
- Edit searchApp.js to include the following code, which connects to your MongoDB database and creates a client.
```
const { MongoClient } = require('mongodb');
// Connection URL
const url = 'yourConnectionString';
// Database Name
const dbName = 'yourDatabaseName';
// Collection Name
const collectionName = 'yourCollectionName';
// Create a new MongoClient
const client = new MongoClient(url);
```
- Now, create a function called `searchDatabase` that takes an input string and field from the command line and searches for documents containing that string in the specified field.
```
// Function to search for a string in the database
async function searchDatabase(fieldName, searchString) {
try {
await client.connect();
console.log('Connected successfully to server');
const db = client.db(dbName);
const collection = db.collection(collectionName);
// Dynamic query based on field name
const query = { [fieldName]: { $regex: searchString, $options: "i" } };
const foundDocuments = await collection.find(query).toArray();
console.log('Found documents:', foundDocuments);
} finally {
await client.close();
}
}
```
- Lastly, create a main function to control the flow of the application.
```
// Main function to control the flow
async function main() {
// Input from command line arguments
const fieldName = process.argv[2];
const searchString = process.argv[3];
if (!fieldName || !searchString) {
console.log('Please provide both a field name and a search string as arguments.');
return;
}
searchDatabase(fieldName, searchString)
.catch(console.error);
}
main().catch(console.error);
```
4. Run your application with `node searchApp.js fieldName "searchString"`.
- The script expects two command line arguments: the field name and the search string. It constructs a dynamic query object using these arguments, where the field name is determined by the first argument, and the search string is used to create a regex query.
In the terminal, you can type the query `node searchApp.js "Restaurant ID" "40356030"` to find your `example2.csv` file as if it was stored in a MongoDB database. Or maybe `node searchApp.js borough "Manhattan"`, to find all restaurants in your virtual database (across all your databases) in Manhattan. You're not just limited to simple queries. Most operators and aggregations are available on your federated database. There are some limitations and variations in the MongoDB Operators and Aggregation Pipeline Stages on your federated database that you can read about in our documentation.
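For example, reusing the `client`, `dbName`, and `collectionName` set up in searchApp.js, a small aggregation like this sketch would count restaurants per borough across every connected source:

```javascript
// Count restaurants per borough across all federated sources.
async function countByBorough() {
  try {
    await client.connect();
    const collection = client.db(dbName).collection(collectionName);
    const counts = await collection
      .aggregate([
        { $group: { _id: "$borough", total: { $sum: 1 } } },
        { $sort: { total: -1 } }
      ])
      .toArray();
    console.log(counts);
  } finally {
    await client.close();
  }
}
```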
## Conclusion
By following the steps outlined, you've learned how to set up Azure Blob storage, upload diverse data formats like JSON and CSV, and connect these with your MongoDB dataset using a federated database.
This tutorial highlights the potential of data federation in breaking down data silos, promoting data interoperability, and enhancing the overall data analysis experience. Whether you're a restaurant reviewer looking to share insights or a business seeking to unify disparate data sources, MongoDB's Data Federation along with Azure Blob storage provides a robust, scalable, and user-friendly platform to meet your data integration needs.
Are you ready to start building with Atlas on Azure? Get started for free today with MongoDB Atlas on Azure Marketplace. If you found this tutorial useful, make sure to check out some more of our articles in Developer Center, like MongoDB Provider for EF Core Tutorial. Or pop over to our Community Forums to see what other people in the community are building!
---
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8526ba2a8dccdc22/65df43a9747141e57e0a356f/image2.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt12c4748f967ddede/65df43a837599950d070b53f/image1.png | md | {
"tags": [
"Atlas",
"JavaScript",
"Azure"
],
"pageDescription": "A tutorial to guide you through integrating your Azure storage with MongoDB using Data Federation",
"contentType": "Tutorial"
} | Atlas Data Federation with Azure Blob Storage | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-partitioning-strategies | created | # Realm Partitioning Strategies
Realm partitioning can be used to control what data is synced to each mobile device, ensuring that your app is efficient, performant, and secure. This article will help you pick the right partitioning strategy for your app.
MongoDB Realm Sync stores the superset of your application data in the cloud using MongoDB Atlas. The simplest strategy is that every instance of your mobile app contains the full database, but that quickly consumes a lot of space on the users' devices and makes the app slow to start while it syncs all of the data for the first time. Alternative strategies include partitioning by:
- User
- Group/team/store
- Channel/room/topic
- Geographic region
- Bucket of time
- Any combination of these
This article covers:
- Introduction to MongoDB Realm Sync Partitioning
- Choosing the Right Strategy(ies) for Your App
- Setting Up Partitions in the Backend Realm App
- Accessing Realm Partitions from Your Mobile App (iOS or Android)
- Resources
- Summary
## Prerequisites
The first part of the article has no prerequisites.
The second half shows how to set up partitioning and open Realms for a partition. If you want to try this in your own apps and you haven't worked with Realm before, then it would be helpful to try a tutorial for iOS or Android first.
## Introduction to MongoDB Realm Sync Partitioning
MongoDB Realm Sync lets a "user" access their application data from multiple mobile devices, whether they're online or disconnected from the internet. The data for all users is stored in MongoDB Atlas. When a user is logged into a device and has a network connection, the data they care about (what that means and how you control it is the subject of this article) is synchronized. When the device is offline, changes are stored locally and then synced when it's back online.
There may be cases where all data should be made available to all users, but I'd argue that it's rare that there isn't at least some data that shouldn't be universally shared. E.g., in a news app, the user may select which topics they want to follow, and set flags to indicate which articles they've already read—that data shouldn't be seen by others.
>In this article, I'm going to refer to "users", but for some apps, you could substitute in "store," "meeting room," "device," "location," ...
**Why bother limiting what data is synced to a mobile app?** There are a couple of reasons:
- Capacity: Why waste limited resources on a mobile device to store data that the user has no interest in?
- Security: If a user isn't entitled to see a piece of data, it's safest not to store it on their device.
The easiest way to understand how partitions work in MongoDB Realm Sync is to look at an example.
MongoDB Realm Sync Partitions
This example app works with shapes. The mobile app defines classes for circles, stars and triangles. In Atlas, each type of shape is stored in a distinct collection (`circles`, `stars` and `triangles`). Each of the shapes (regardless of which of the collections it's stored in) has a `color` attribute.
When using the mobile app, the user is interested in working with a color. It could be that the user is only allowed to work with a single color, or it could be that the user can pick what color they currently want to work with. The backend Realm app gets to control which colors a given user is permitted to access.
The developer implements this by designating the `color` attribute as the partition key.
A view in the mobile app can then open a synced Realm by specifying the color it wants to work with. The backend Realm app will then sync all shapes of that color to the mobile Realm, or it will reject the request if the user doesn't have permission to access that partition.
There are some constraints on the partition key:
- The application must provide an exact match. It can specify that the Realm it's opening should contain the *blue* colored shapes, or that it should contain the *green* shapes. The app cannot open a synced Realm that contains both the *red* and *green* shapes.
- The app must specify an exact match for the partition key. It cannot open a synced Realm for a range or pattern of partition key values. E.g. it can't specify "all colors except *red*" or "all dates in the last week".
- Every collection must use the same partition key. In other words, you can't use `color` as the partition key for collections in the `shapes` database and `username` for collections in the `user` database. You'll see later that there's a technique to work around this.
- You **can** change the value of the partition key (convert a `red` triangle into a `green` triangle), but it's inefficient as it results in the existing document being deleted and a new one being inserted.
- The partition key must be one of these types:
- `String`
- `ObjectID`
- `Int`
- `Long`
The mobile app can ask to open a Realm using any value for the partition key, but it might be that the user isn't allowed access to that partition. For security, that check is performed in the backend Realm application. The developer can provide rules to decide if a user can access a partition, and the decision could be any one of:
- No.
- Yes, but only for reads.
- Yes, for both reads and writes.
The permission rules can be anything from a simple expression that matches the partition key value, to a complex function that cross-references other collections.
In reality, the rules don't need to be based on the user. For example, the developer could decide that the "happy hour" chat room (partition) can only be opened on Fridays.
## Choosing the Right Strategy(ies) for Your App
This section takes a deeper look at some of the partitioning strategies that you can adopt (or that may inspire you to create a bespoke approach). As you read through these strategies, remember that you can combine them within a single app. This is the meta-strategy we'll look at last.
### Firehose
This is the simplest strategy. All of the documents/objects are synced to every instance of the app. This is a decision **not** to partition the data.
You might adopt this strategy for an NFL (National Football League) scores app where you want everyone to be able to view every result from every game in history—even when the app is offline.
Consider the two main reasons for partitioning:
- **Capacity**: There have been fewer than 20,000 NFL games ever played, and the number is growing by fewer than 300 per year. The data for each game contains only the date, names of the two teams, and the score, and so the total volume of data is modest. It's reasonable to store all of this data on a mobile device.
- **Security/Privacy**: There's nothing private in this data, and so it's safe to allow anyone on any mobile device to view it. We don't allow the mobile app to make any changes to the data. These are simple Realm Sync rules to define in the backend Realm app.
Even though this strategy doesn't require partitioning, you must still designate a partition key when configuring Realm Sync. We want all of the documents/objects to be in the same partition and so we can add an attribute named `visible` and always set it to `true`.
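For illustration, a game document in this firehose collection might look like the following sketch (the field names and values here are assumptions, with the constant `visible` attribute acting as the partition key):

``` json
{
  "_id": "1967-01-15-gb-kc",
  "date": "1967-01-15",
  "homeTeam": "Green Bay Packers",
  "awayTeam": "Kansas City Chiefs",
  "homeScore": 35,
  "awayScore": 10,
  "visible": true
}
```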
### User
User-based partitioning is a common strategy. Each user has a unique ID (that can be automatically created by MongoDB Realm). Each document contains an attribute that identifies the user that owns it. This could be a username, email address, or the `Id` generated by MongoDB Realm when the user registers. That attribute is used as the partitioning key.
Use cases for this strategy include financial transactions, order history, game scores, and journal entries.
Consider the two main drivers for partitioning:
- **Capacity**: Only the data that's unique to the users is stored in the mobile app, which minimizes storage.
- **Security/Privacy**: Users only have access to their own data.
There is often a subset of the user's data that should be made available to team members or to all users. In such cases, you may break the data into multiple collections, perhaps duplicating some data, and using different partition key values for the documents in those collections. You can see an example of this with the `User` and `Chatster` collections in the Rchat app.
### Team
This strategy is used when you need to share data between a team of users. You can replace the term "team" with "agency," "store," or any other grouping of users or devices. Examples include all of the point-of-sale devices in a store or all of the employees in a department. The team's name or ID is used as the partitioning key and must be included in all documents in each synced collection.
The WildAid O-FISH App uses the agency name as the partition key. Each agency is the set of officers belonging to an organization responsible for enforcing regulations in one or more Marine Protected Areas. (You can think of an MPA as an ocean-based national park.) Every officer in an agency can create new reports and view all of the agency's existing reports. Agencies can customize the UI by controlling what options are offered when an officer creates a new report. E.g., an agency controlling the North Sea would include "cod" in the list of fish that could have been caught, but not "clownfish". The O-FISH menus are data-driven, with that data partitioned based on the agency.
- **Capacity**: The "team" strategy consumes more space on the mobile device than the "user" partitioning strategy, but it's a good fit when all members of the team need to access the data (even when offline).
- **Security/Privacy**: This strategy is used when all team members are allowed to view (and optionally modify) their team's data.
### Channel
With this strategy, a user is typically entitled to open/sync Realms from a choice of channels. For example, a sports news app might have channels for soccer, baseball, etc., a chat app would offer multiple chat rooms, and an issue tracker might partition based on product. The channel name or ID should be used as the partitioning key.
- **Capacity**: The mobile app can minimize storage use on the device by only opening a Realm for the partition representing the channel that the user is currently interacting with.
- **Security/Privacy**: Realm Sync permissions can be added so that a user can only open a synced Realm for a partition if they're entitled to. For example, this might be handled by storing an array of allowed channels as part of the user's data.
### Region
There are cases where you're only currently interested in data for a particular geographic area. Maps, cycle hire apps, and tourist guides are examples.
If you recall, when opening a Realm, the application must specify an exact match for the partition key, and that value needs to match the partition value in any document that is part of that partition. This restricts what you can do with location-based partitioning:
- You **can** open a partition containing all documents where `location` is set to `"London"`.
- You **can't** open a partition containing all documents where `location` is set to `"either London or South East England"`.
- The partition key can't be an array.
- You **can't** open a partition containing all documents where `location` is set to coordinates within a specified range.
The upshot of this is that you need to decide on geographic regions and assign them IDs or names. Each document can only belong to one of these regions. If you decided to use the state as your region, then the app can open a single synced Realm to access all of the data for Texas, but if the app wanted to be able to show data for all states in the US then it would need to open 50 synced Realms.
- **Capacity**: Storage efficiency is dependent on how well your choice of regions matches how the application needs to work with the data. For example, if your app only ever lets the user work with data for a single state, then it would waste a lot of storage if you used countries as your regions.
- **Security/Privacy**: In the cases that you want to control which users can access which region, Realm Sync permissions can be added.
In some cases, you may choose to duplicate some data in the backend (Atlas) database in order to optimise the frontend storage, where resources are more constrained. An analog is old-world (paper) travel guides. Lonely Planet produced a guide for Southeast Asia, in addition to individual guides for Vietnam, Thailand, Cambodia, etc. The guide for Cambodia contained 500 pages of information. Some of that same information (enough to fill 50 pages) was also printed in the Southeast Asia guide. The result was that the library of guides (think Atlas) contained duplicate information but it had plenty of space on its shelves. When I go on vacation, I could choose which region/partition I wanted to take with me in my small backpack (think mobile app). If I'm spending a month trekking around the whole of Southeast Asia, then I take that guide. If I'm spending the whole month in Vietnam, then I take that guide.
If you choose to duplicate data in multiple regions, then you can set up Atlas database triggers to automate the process.
### Time Bucket
As with location, it doesn't make sense to use the exact time as the partition key as you typically would want to open a synced Realm for a range of times. The result is that you'd typically use discrete time ranges for your partition key values. A compatible set of partition values is "Today," "Earlier this week," "This month (but not this week)," "Earlier this year (but not this month)," "2020," "2000-2019," and "Twentieth Century."
You can use Atlas scheduled and database triggers to automatically move documents between locations (e.g., at midnight, find all documents with `time == "Today"` and set `time = "Earlier this week"`. Note that changing the value of a partition key is expensive as it's implemented as a delete and insert.
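As a rough sketch of that midnight job (the database and collection names here are assumptions), an Atlas scheduled trigger function could look like this:

``` javascript
exports = function() {
  // Assumed database/collection names for illustration
  const articles = context.services.get("mongodb-atlas")
    .db("News")
    .collection("Article");

  // Remember: changing a partition key value is a delete + insert under the hood,
  // so it's best to run this during a quiet period (e.g., scheduled for midnight).
  return articles.updateMany(
    { time: "Today" },
    { $set: { time: "Earlier this week" } }
  ).then(result => {
    console.log(`Moved ${result.modifiedCount} documents into the new time bucket`);
  }, error => {
    console.log(`Failed to move documents: ${error}`);
  });
};
```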
- **Capacity**: Storage efficiency is dependent on how well your choice of time buckets matches how the application needs to work with the data. That probably sounds familiar—time bucket partitioning is analogous to region-based partitioning (with the exception that a city is unlikely to move from Florida to Alaska). As with regions, you may decide to duplicate some data—perhaps having two documents for today's data: one with `time == "Today"` and the other with `time == "This week"`.
- **Security/Privacy**: In the cases that you want to control which users can access which time period, Realm Sync permissions can be added.
>Note that slight variations on the Region and Time Bucket strategies can be used whenever you need to partition on ranges—age, temperature, weight, exam score...
### Combination/Hybrid
For many applications, no single partitioning strategy that we've looked at meets all of its use cases.
Consider an eCommerce app. You might decide to have a single read-only partition for the entire product catalog. But, if the product catalog is very large, then you could choose to partition based on product categories (sporting goods, electronics, etc.) to reduce storage size on the mobile device. When that user browses their order history, they shouldn't drag in orders for other users and so `user-id` would be a smart partitioning key. Unfortunately, the same key has to be used for every collection.
This can be solved by using `partition` as the partition key. `partition` is a `String` and its value is always made up of a key-value pair. In our eCommerce app, documents in the `productCatalog` collection could contain `partition: "category=sports"` and documents in the `orders` collection would include `partition: "user=andrew@acme.com"`.
When the application opens a synced Realm, it provides a value such as `"user=andrew@acme.com"` as the partition. The Realm sync rules can parse the value of the partition key to determine if the user is allowed to open that partition by splitting the key to find the sub-key (`user`) and its value (`andrew@acme.com`). The rule knows that when `key == "user"`, it needs to check that the current user's email address matches the value.
- **Capacity**: By using an optimal partitioning sub-strategy for each type of data, you can fine-tune what data is stored in the mobile app.
- **Security/Privacy**: Your backend Realm app can apply custom rules based on the `key` component of the partition to decide whether the user is allowed to sync the requested partition.
You can see an example of how this is implemented for a chatroom app in Building a Mobile Chat App Using Realm – Data Architecture.
## Setting Up Partitions in the Backend Realm App
You need to set up one backend Realm app, which can then be used by both your iOS and Android apps. You can also have multiple iOS and Android apps using the same back end.
### Set Partition and Enable MongoDB Realm Sync
From the Realm UI, select the "Sync" tab. From that view, you select whether you'd prefer to specify your schema through the back end or have it automatically derived from the Realm Objects that you define in your mobile app. If you don't already have data in your Atlas database, then I'd suggest the second option which turns on "Dev Mode," which is the quickest way to get started:
On the next screen, select your key, specify the attribute to use as the partition key (in this case, a new string attribute named "partition"), and the database. Click "Turn Dev Mode On":
Click on the "REVIEW & DEPLOY" button. You'll need to do this every time you change the Realm app, but this is the last time that I'll mention it:
Now that Realm sync has been enabled, you should ensure that you set the `partition` attribute in all documents in any collections to be synced.
### Sync Rules
Realm Sync rules control whether the user/app is permitted to sync a partition or not.
>A common misconception is that sync rules can control which documents within a partition will be synced. That isn't the case. They simply determine (true or false) whether the user is allowed to sync the entire partition.
The default behaviour is that the app can sync whichever partition it requests, and so you need to change the rules if you want to increase security/privacy—which you probably do before going into production!
To see or change the rules, select the "Configuration" tab and then expand the "Define Permissions" section:
Both the read and write rules default to `true`.
You should click "Pause Sync" before editing the rules and then re-enable sync afterwards.
The rules are JSON expressions that have access to the user object (`%%user`) and the requested partition (`%%partition`). If you're using the user ID as your partitioning key, then this rule would ensure that a user can only sync the partition containing their documents: `{ "%%user.id": "%%partition" }`.
For more complex partitioning schemes (e.g., the combination strategy), you can provide a JSON expression that delegates the `true`/`false` decision to a Realm function:
``` json
{
"%%true": {
"%function": {
"arguments":
"%%partition"
],
"name": "canReadPartition"
}
}
}
```
It's then your responsibility to create the `canReadPartition` function. Here's an example from the RChat app:
``` javascript
exports = function(partition) {
console.log(`Checking if can sync a read for partition = ${partition}`);
const db = context.services.get("mongodb-atlas").db("RChat");
const chatsterCollection = db.collection("Chatster");
const userCollection = db.collection("User");
const chatCollection = db.collection("ChatMessage");
const user = context.user;
let partitionKey = "";
  let partitionValue = "";
const splitPartition = partition.split("=");
if (splitPartition.length == 2) {
    partitionKey = splitPartition[0];
partitionValue = splitPartition[1];
console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);
} else {
console.log(`Couldn't extract the partition key/value from ${partition}`);
return false;
}
switch (partitionKey) {
case "user":
      console.log(`Checking if partitionValue(${partitionValue}) matches user.id(${user.id}) – ${partitionValue === user.id}`);
return partitionValue === user.id;
case "conversation":
console.log(`Looking up User document for _id = ${user.id}`);
return userCollection.findOne({ _id: user.id })
.then (userDoc => {
if (userDoc.conversations) {
let foundMatch = false;
userDoc.conversations.forEach( conversation => {
          console.log(`Checking if conversation.id (${conversation.id}) === ${partitionValue}`)
if (conversation.id === partitionValue) {
console.log(`Found matching conversation element for id = ${partitionValue}`);
foundMatch = true;
}
});
if (foundMatch) {
console.log(`Found Match`);
return true;
} else {
console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);
return false;
}
} else {
console.log(`No conversations attribute in User doc`);
return false;
}
}, error => {
console.log(`Unable to read User document: ${error}`);
return false;
});
case "all-users":
console.log(`Any user can read all-users partitions`);
return true;
default:
console.log(`Unexpected partition key: ${partitionKey}`);
return false;
}
};
```
The function splits the partition string, taking the key from the left of the `=` symbol and the value from the right side. It then runs a specific check based on the key:
- `user`: Checks that the value matches the current user's ID.
- `conversation`: This is used for the chat messages. Checks that the value matches one of the conversations stored in the user's document (i.e. that the current user is a member of the chat room.)
- `all-users`: This is used for the `Chatster` collection which provides a read-only view of a subset of each user's data, such as their name and presence state. This data is readable by anyone and so the function always returns true.
RChat also has a `canWritePartition` function which has a similar structure but applies different checks. You can view that function here.
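The linked function is the definitive version, but as a rough sketch of its shape, it follows the same split-and-switch pattern while applying stricter checks (for example, nobody is allowed to write to the read-only `all-users` partition):

``` javascript
exports = function(partition) {
  console.log(`Checking if can sync a write for partition = ${partition}`);
  const user = context.user;
  const splitPartition = partition.split("=");
  if (splitPartition.length !== 2) {
    console.log(`Couldn't extract the partition key/value from ${partition}`);
    return false;
  }
  const partitionKey = splitPartition[0];
  const partitionValue = splitPartition[1];
  switch (partitionKey) {
    case "user":
      // A user may only write to their own User documents
      return partitionValue === user.id;
    case "conversation":
      // The real implementation checks the user's conversation membership,
      // just as canReadPartition does (omitted here for brevity)
      return false;
    case "all-users":
      // Chatster documents are maintained by a trigger, never written by the app
      return false;
    default:
      console.log(`Unexpected partition key: ${partitionKey}`);
      return false;
  }
};
```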
### Triggers
MongoDB Realm provides three types of triggers:
- **Authentication**: Often used to create a user document when a new user registers.
- **Database**: Invoked when your nominated collection is updated. You can use database triggers to automate the duplication of data so that it can be shared through a different partition.
- **Scheduled**: Similar to a `cron` job, scheduled triggers run at a specified time or interval. They can be used to move documents into different time buckets (e.g., from "Today" into "Earlier this week").
In the RChat app, only the owner is allowed to read or write their `User` document, but we want the user to be discoverable by anyone and for their presence state to be visible to others. We add a database trigger that mirrors a subset of the `User` document to a `Chatster` document which is in a publicly visible partition.
The first step is to create a database trigger by selecting "Triggers" and then clicking "Add a Trigger":
Fill in the details about the collection that invokes the new trigger, specify which operations we care about (all of them), and then indicate that we'll provide a new function to be executed when the trigger fires:
After saving that definition, you're taken to the function editor to add the logic. This is the code for the trigger on the `User` collection:
``` javascript
exports = function(changeEvent) {
const db = context.services.get("mongodb-atlas").db("RChat");
const chatster = db.collection("Chatster");
const userCollection = db.collection("User");
let eventCollection = context.services.get("mongodb-atlas").db("RChat").collection("Event");
const docId = changeEvent.documentKey._id;
const user = changeEvent.fullDocument;
let conversationsChanged = false;
console.log(`Mirroring user for docId=${docId}. operationType = ${changeEvent.operationType}`);
switch (changeEvent.operationType) {
case "insert":
case "replace":
case "update":
console.log(`Writing data for ${user.userName}`);
let chatsterDoc = {
_id: user._id,
partition: "all-users=all-the-users",
userName: user.userName,
lastSeenAt: user.lastSeenAt,
presence: user.presence
};
if (user.userPreferences) {
const prefs = user.userPreferences;
chatsterDoc.displayName = prefs.displayName;
if (prefs.avatarImage && prefs.avatarImage._id) {
console.log(`Copying avatarImage`);
chatsterDoc.avatarImage = prefs.avatarImage;
console.log(`id of avatarImage = ${prefs.avatarImage._id}`);
}
}
chatster.replaceOne({ _id: user._id }, chatsterDoc, { upsert: true })
.then (() => {
console.log(`Wrote Chatster document for _id: ${docId}`);
}, error => {
console.log(`Failed to write Chatster document for _id=${docId}: ${error}`);
});
if (user.conversations && user.conversations.length > 0) {
for (i = 0; i < user.conversations.length; i++) {
        let membersToAdd = [];
if (user.conversations[i].members.length > 0) {
for (j = 0; j < user.conversations[i].members.length; j++) {
if (user.conversations[i].members[j].membershipStatus == "User added, but invite pending") {
membersToAdd.push(user.conversations[i].members[j].userName);
user.conversations[i].members[j].membershipStatus = "Membership active";
conversationsChanged = true;
}
}
}
if (membersToAdd.length > 0) {
userCollection.updateMany({userName: {$in: membersToAdd}}, {$push: {conversations: user.conversations[i]}})
.then (result => {
console.log(`Updated ${result.modifiedCount} other User documents`);
}, error => {
console.log(`Failed to copy new conversation to other users: ${error}`);
});
}
}
}
if (conversationsChanged) {
userCollection.updateOne({_id: user._id}, {$set: {conversations: user.conversations}});
}
break;
case "delete":
chatster.deleteOne({_id: docId})
.then (() => {
console.log(`Deleted Chatster document for _id: ${docId}`);
}, error => {
console.log(`Failed to delete Chatster document for _id=${docId}: ${error}`);
});
break;
}
};
```
Note that the `Chatster` document is created with `partition` set to `"all-users=all-the-users"`. This is what makes the document accessible by any user.
## Accessing Realm Partitions from Your Mobile App (iOS or Android)
In this section, you'll learn how to request a partition when opening a Realm. If you want more of a primer on using Realm in a mobile app, then these are suitable resources:
- Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine (iOS, Swift, SwiftUI): A good intro, but there have been some enhancements to the Realm SDK since it was written.
- Building a Mobile Chat App Using Realm – Data Architecture (iOS, Swift, SwiftUI): This series involves a more complex app, but it uses the latest SwiftUI features in the Realm SDK.
- Building an Android Emoji Garden on Jetpacks! (Compose) with Realm (Android, Kotlin, Jetpack Compose)
First of all, note that you don't need to include the partition key in your iOS or Android `Object` definitions; it's handled automatically by Realm.
All you need to do is specify the partition value when opening a synced Realm:
::::tabs
:::tab[]{tabid="Swift"}
``` swift
ChatRoomBubblesView(conversation: conversation)
.environment(
\.realmConfiguration,
app.currentUser!.configuration(partitionValue: "conversation=\(conversation.id)"))
```
:::
:::tab[]{tabid="Kotlin"}
``` kotlin
val config: SyncConfiguration = SyncConfiguration.defaultConfig(user, "conversation=${conversation.id}")
syncedRealm = Realm.getInstance(config)
```
:::
::::
## Summary
At this point, you've hopefully learned:
- That MongoDB Realm Sync partitioning is a great way to control data privacy and storage requirements in your mobile app.
- How Realm partitioning works.
- A number of partitioning strategies.
- How to combine strategies to build the optimal solution for your mobile app.
- How to implement your partitioning strategy in your backend Realm app and in your iOS/Android mobile apps.
## Resources
- Building a Mobile Chat App Using Realm – Data Architecture.
- Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine.
- Building an Android Emoji Garden on Jetpacks! (Compose) with Realm.
- Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps.
- MongoDB Realm Sync docs.
- MongoDB Realm Sync partitioning docs.
- Realm iOS SDK.
- Realm Kotlin SDK.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Realm"
],
"pageDescription": "How to use Realm partitions to make your app efficient, performant, and secure.",
"contentType": "Tutorial"
} | Realm Partitioning Strategies | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/gemma-mongodb-huggingface-rag | created | # Building a RAG System With Google's Gemma, Hugging Face and MongoDB
## Introduction
Google recently released a state-of-the-art open model into the AI community called Gemma. Specifically, Google released four variants of Gemma: Gemma 2B base model, Gemma 2B instruct model, Gemma 7B base model, and Gemma 7B instruct model. The Gemma open model and its variants utilise similar building blocks as Gemini, Google’s most capable and efficient foundation model built with a Mixture-of-Experts (MoE) architecture.
**This article presents how to leverage Gemma as the foundation model in a retrieval-augmented generation (RAG) pipeline or system, with supporting models provided by Hugging Face, a repository for open-source models, datasets, and compute resources.** The AI stack presented in this article utilises the GTE large embedding models from Hugging Face and MongoDB as the vector database.
**Here’s what to expect from this article:**
- Quick overview of a RAG system
- Information on Google’s latest open model, Gemma
- Utilising Gemma in a RAG system as the base model
- Building an end-to-end RAG system with an open-source base and embedding models from Hugging Face

All implementation steps can be found in the repository, which has a notebook version of the RAG system presented in this article.

## Step 1: installing libraries
The shell command sequence below installs libraries for leveraging open-source large language models (LLMs), embedding models, and database interaction functionalities. These libraries simplify the development of a RAG system, reducing the complexity to a small amount of code:
```
!pip install datasets pandas pymongo sentence_transformers
!pip install -U transformers
# Install below if using GPU
!pip install accelerate
```
- **PyMongo:** A Python library for interacting with MongoDB that enables functionalities to connect to a cluster and query data stored in collections and documents.
- **Pandas**: Provides a data structure for efficient data processing and analysis using Python
- **Hugging Face datasets:** Holds audio, vision, and text datasets
- **Hugging Face Accelerate**: Abstracts the complexity of writing code that leverages hardware accelerators such as GPUs. Accelerate is leveraged in the implementation to utilise the Gemma model on GPU resources.
- **Hugging Face Transformers**: Access to a vast collection of pre-trained models
- **Hugging Face Sentence Transformers**: Provides access to sentence, text, and image embeddings.
## Step 2: data sourcing and preparation
The data utilised in this tutorial is sourced from Hugging Face datasets, specifically the AIatMongoDB/embedded\_movies dataset.
A datapoint within the movie dataset contains attributes specific to an individual movie entry; plot, genre, cast, runtime, and more are captured for each data point. After loading the dataset into the development environment, it is converted into a Pandas DataFrame object, which enables efficient data structure manipulation and analysis.
```python
# Load Dataset
from datasets import load_dataset
import pandas as pd
# https://huggingface.co/datasets/MongoDB/embedded_movies
dataset = load_dataset("MongoDB/embedded_movies")
# Convert the dataset to a pandas DataFrame
dataset_df = pd.DataFrame(dataset['train'])
```
The operations within the code snippet below focus on enforcing data integrity and quality.
1. The first process ensures that each data point's `fullplot` attribute is not empty, as this is the primary data we utilise in the embedding process.
2. This step also ensures we remove the `plot_embedding` attribute from all data points as this will be replaced by new embeddings created with a different embedding model, the `gte-large`.
```python
# Remove data point where plot column is missing
dataset_df = dataset_df.dropna(subset=['fullplot'])
print("\nNumber of missing values in each column after removal:")
print(dataset_df.isnull().sum())
# Remove the plot_embedding from each data point in the dataset as we are going to create new embeddings with an open-source embedding model from Hugging Face: gte-large
dataset_df = dataset_df.drop(columns=['plot_embedding'])
```
## Step 3: generating embeddings
**Embedding models convert high-dimensional data such as text, audio, and images into a lower-dimensional numerical representation that captures the input data's semantics and context.** This embedding representation of data can be used to conduct semantic searches based on the positions and proximity of embeddings to each other within a vector space.
The embedding model used in the RAG system is the General Text Embedding (GTE) model, based on the BERT model. The GTE embedding models come in three variants, mentioned below, and were trained and released by Alibaba DAMO Academy, a research institution.
| **Model** | **Dimension** | **Massive Text Embedding Benchmark (MTEB) Leaderboard Retrieval (Average)** |
| ---------------------- | ------------- | --------------------------------------------------------------------------- |
| GTE-large | 1024 | 52.22 |
| GTE-base | 768 | 51.14 |
| GTE-small | 384 | 49.46 |
| text-embedding-ada-002 | 1536 | 49.25 |
| text-embedding-3-small | 256 | 51.08 |
| text-embedding-3-large | 256 | 51.66 |
In the comparison between open-source embedding models GTE and embedding models provided by OpenAI, the GTE-large embedding model offers better performance on retrieval tasks but requires more storage for embedding vectors compared to the latest embedding models from OpenAI. Notably, the GTE embedding model can only be used on English texts.
The code snippet below demonstrates generating text embeddings based on the text in the "fullplot" attribute for each movie record in the DataFrame. Using the SentenceTransformers library, we get access to the "thenlper/gte-large" model hosted on Hugging Face. If your development environment has limited computational resources and cannot hold the embedding model in RAM, utilise other variants of the GTE embedding model: gte-base or gte-small.
The steps in the code snippets are as follows:
1. Import the `SentenceTransformer` class to access the embedding models.
2. Load the embedding model using the `SentenceTransformer` constructor
to instantiate the `gte-large` embedding model.
3. Define the `get_embedding function`, which takes a text string as
input and returns a list of floats representing the embedding. The
function first checks if the input text is not empty (after
stripping whitespace). If the text is empty, it returns an empty
list. Otherwise, it generates an embedding using the loaded model.
4. Generate embeddings by applying the `get_embedding` function to the
"fullplot" column of the `dataset_df` DataFrame, generating
embeddings for each movie's plot. The resulting list of embeddings
is assigned to a new column named embedding.
```python
from sentence_transformers import SentenceTransformer
# https://huggingface.co/thenlper/gte-large
embedding_model = SentenceTransformer("thenlper/gte-large")
def get_embedding(text: str) -> list[float]:
if not text.strip():
print("Attempted to get embedding for empty text.")
return []
embedding = embedding_model.encode(text)
return embedding.tolist()
dataset_df["embedding"] = dataset_df["fullplot"].apply(get_embedding)
```
After this section, we now have a complete dataset with embeddings that can be ingested into a vector database, like MongoDB, where vector search operations can be performed.
## Step 4: database setup and connection
Before moving forward, ensure the following prerequisites are met:
- Database cluster set up on MongoDB Atlas
- Obtained the URI to your cluster
For assistance with database cluster setup and obtaining the URI, refer to our guide for setting up a MongoDB cluster and getting your connection string. Alternatively, follow Step 5 of this article on using embeddings in a RAG system, which offers detailed instructions on configuring and setting up the database cluster.
Once you have created a cluster, create the database and collection within the MongoDB Atlas cluster by clicking **+ Create Database**. The database will be named movies, and the collection will be named movies\_records.
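With the cluster, database, and collection created, you can connect from the development environment using PyMongo. Here's a minimal sketch; how you store the connection string is up to you (an environment variable named `MONGO_URI` is assumed below), and the resulting `collection` object is what the later snippets query:

```python
import os
from pymongo import MongoClient

# Read the Atlas connection string from an environment variable
mongo_uri = os.environ.get("MONGO_URI")
if not mongo_uri:
    raise ValueError("MONGO_URI is not set in the environment")

mongo_client = MongoClient(mongo_uri)

# Database and collection created in the Atlas UI above
db = mongo_client["movies"]
collection = db["movies_records"]
```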
## Step 5: create a vector search index

At this point, create the vector search index on the collection by following the steps in our vector search index creation guide.
In the creation of a vector search index using the JSON editor on MongoDB Atlas, ensure your vector search index is named **vector\_index** and the vector search index definition is as follows:
```
{
"fields": {
"numDimensions": 1024,
"path": "embedding",
"similarity": "cosine",
"type": "vector"
}]
}
```
The 1024 value of the `numDimensions` field corresponds to the dimension of the vectors generated by the gte-large embedding model. If you use the `gte-base` or `gte-small` embedding models, the `numDimensions` value in the vector search index must be set to **768** and **384**, respectively.
## Step 6: data ingestion and Vector Search
Up to this point, we have successfully done the following:
- Loaded data sourced from Hugging Face
- Provided each data point with embedding using the GTE-large embedding
model from Hugging Face
- Set up a MongoDB database designed to store vector embeddings
- Established a connection to this database from our development
environment
- Defined a vector search index for efficient querying of vector
embeddings
Ingesting data into a MongoDB collection from a pandas DataFrame is a straightforward process that can be efficiently accomplished by converting the DataFrame into dictionaries and then utilising the `insert_many` method on the collection to pass the converted dataset records.
```python
documents = dataset_df.to_dict('records')
collection.insert_many(documents)
print("Data ingestion into MongoDB completed")
```
The operations below are performed in the code snippet:
1. Convert the dataset DataFrame to a dictionary using the`to_dict('records')` method on `dataset_df`. This method transforms the DataFrame into a list of dictionaries. The `records` parameter is crucial as it encapsulates each row as a single dictionary.
2. Ingest data into the MongoDB vector database by calling the `insert_many(documents)` function on the MongoDB collection, passing it the list of dictionaries. MongoDB's `insert_many` function ingests each dictionary from the list as an individual document within the collection.
The following step implements a function that returns a vector search result by generating a query embedding and defining a MongoDB aggregation pipeline.
The pipeline, consisting of the `$vectorSearch` and `$project` stages, executes queries using the generated vector and formats the results to include only the required information, such as plot, title, and genres while incorporating a search score for each result.
```python
def vector_search(user_query, collection):
"""
Perform a vector search in the MongoDB collection based on the user query.
Args:
user_query (str): The user's query string.
collection (MongoCollection): The MongoDB collection to search.
Returns:
list: A list of matching documents.
"""
# Generate embedding for the user query
query_embedding = get_embedding(user_query)
if query_embedding is None:
return "Invalid query or embedding generation failed."
# Define the vector search pipeline
pipeline = [
{
"$vectorSearch": {
"index": "vector_index",
"queryVector": query_embedding,
"path": "embedding",
"numCandidates": 150, # Number of candidate matches to consider
"limit": 4, # Return top 4 matches
}
},
{
"$project": {
"_id": 0, # Exclude the _id field
"fullplot": 1, # Include the plot field
"title": 1, # Include the title field
"genres": 1, # Include the genres field
"score": {"$meta": "vectorSearchScore"}, # Include the search score
}
},
]
# Execute the search
results = collection.aggregate(pipeline)
return list(results)
```
The code snippet above conducts the following operations to allow semantic search for movies:
1. Define the `vector_search` function that takes a user's query string and a MongoDB collection as inputs and returns a list of documents that match the query based on vector similarity search.
2. Generate an embedding for the user's query by calling the previously defined function, `get_embedding`, which converts the query string into a vector representation.
3. Construct a pipeline for MongoDB's aggregate function, incorporating two main stages: `$vectorSearch` and `$project`.
4. The `$vectorSearch` stage performs the actual vector search. The `index` field specifies the vector index to utilise for the vector search, and this should correspond to the name entered in the vector search index definition in previous steps. The `queryVector` field takes the embedding representation of the user query. The `path` field corresponds to the document field containing the embeddings. The `numCandidates` field specifies the number of candidate documents to consider, and `limit` sets the number of results to return.
5. The `$project` stage formats the results to include only the required fields: plot, title, genres, and the search score. It explicitly excludes the `_id` field.
6. The `aggregate` executes the defined pipeline to obtain the vector search results. The final operation converts the returned cursor from the database into a list.
## Step 7: handling user queries and loading Gemma
The code snippet defines the function `get_search_result`, a custom wrapper for performing the vector search using MongoDB and formatting the results to be passed to downstream stages in the RAG pipeline.
```python
def get_search_result(query, collection):
get_knowledge = vector_search(query, collection)
search_result = ""
for result in get_knowledge:
search_result += f"Title: {result.get('title', 'N/A')}, Plot: {result.get('fullplot', 'N/A')}\n"
return search_result
```
The formatting of the search results extracts the title and plot using the get method and provides default values ("N/A") if either field is missing. The returned results are formatted into a string that includes both the title and plot of each document, which is appended to `search_result`, with each document's details separated by a newline character.
The RAG system implemented in this use case is a query engine that conducts movie recommendations and provides a justification for its selection.
```python
# Conduct query with retrieval of sources
query = "What is the best romantic movie to watch and why?"
source_information = get_search_result(query, collection)
combined_information = f"Query: {query}\nContinue to answer the query by using the Search Results:\n{source_information}."
print(combined_information)
```
A user query is defined in the code snippet above; this query is the target for semantic search against the movie embeddings in the database collection. The query and vector search results are combined into a single string to pass as a full context to the base model for the RAG system.
The next steps load the Gemma 2B instruct model ("google/gemma-2b-it") into the development environment using the Hugging Face Transformers library. Specifically, the code snippet below loads a tokenizer and a model from the Transformers library by Hugging Face.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
# CPU Enabled uncomment below 👇🏽
# model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
# GPU Enabled use below 👇🏽
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto")
```
**Here are the steps to load the Gemma open model:**
1. Import `AutoTokenizer` and `AutoModelForCausalLM` classes from the transformers module.
2. Load the tokenizer using the `AutoTokenizer.from_pretrained` method to instantiate a tokenizer for the "google/gemma-2b-it" model. This tokenizer converts input text into a sequence of tokens that the model can process.
3. Load the model using the `AutoModelForCausalLM.from_pretrained`method. There are two options provided for model loading, and each one accommodates different computing environments.
4. CPU usage: For environments only utilising CPU for computations, the model can be loaded without specifying the `device_map` parameter.
5. GPU usage: The `device_map="auto"` parameter is included for environments with GPU support to map the model's components automatically to available GPU compute resources.
```python
# Moving tensors to GPU
input_ids = tokenizer(combined_information, return_tensors="pt").to("cuda")
response = model.generate(**input_ids, max_new_tokens=500)
print(tokenizer.decode(response[0]))
```
**The steps to process user inputs and Gemma’s output are as follows:**
1. Tokenize the text input `combined_information` to obtain a sequence of numerical tokens as PyTorch tensors; the result of this operation is assigned to the variable `input_ids`.
2. The `input_ids` are moved to the available GPU resource using the `.to("cuda")` method; the aim is to speed up the model’s computation.
3. Generate a response from the model by invoking the `model.generate` function with the `input_ids` tensor. The `max_new_tokens=500` parameter limits the length of the generated text, preventing the model from producing excessively long outputs.
4. Finally, decode the model’s response using the `tokenizer.decode` method, which converts the generated tokens into a readable text string. The `response[0]` accesses the response tensor containing the generated tokens.
| **Query** | **Gemma’s responses** |
| --------- | --------------------- |
| What is the best romantic movie to watch and why? | Based on the search results, the best romantic movie to watch is \*\*Shut Up and Kiss Me!\*\* because it is a romantic comedy that explores the complexities of love and relationships. The movie is funny, heartwarming, and thought-provoking |
***
## Conclusion
The implementation of a RAG system in this article utilised entirely open datasets, models, and embedding models available via Hugging Face. Utilising Gemma, it’s possible to build RAG systems with models that do not rely on the management and availability of models from closed-source model providers.
The advantages of leveraging open models include transparency in the training details of models utilised, the opportunity to fine-tune base models for further niche task utilisation, and the ability to utilise private sensitive data with locally hosted models.
To better understand open vs. closed models and their application to a RAG system, we have an article that implements an end-to-end RAG system using the POLM stack, which leverages embedding models and LLMs provided by OpenAI.
All implementation steps can be accessed in the repository, which has a notebook version of the RAG system presented in this article.
***
## FAQs
**1. What are the Gemma models?**
Gemma models are a family of lightweight, state-of-the-art open models for text generation, including question-answering, summarisation, and reasoning. Inspired by Google's Gemini, they are available in 2B and 7B sizes, with pre-trained and instruction-tuned variants.
**2. How do Gemma models fit into a RAG system?**
In a RAG system, Gemma models are the base model for generating responses based on input queries and source information retrieved through vector search. Their efficiency and versatility in handling a wide range of text formats make them ideal for this purpose.
**3. Why use MongoDB in a RAG system?**
MongoDB is used for its robust management of vector embeddings, enabling efficient storage, retrieval, and querying of document vectors. MongoDB also serves as an operational database that enables traditional transactional database capabilities. MongoDB serves as both the operational and vector database for modern AI applications.
**4. Can Gemma models run on limited resources?**
Despite their advanced capabilities, Gemma models are designed to be deployable in environments with limited computational resources, such as laptops or desktops, making them accessible for a wide range of applications. Gemma models can also be deployed using deployment options enabled by Hugging Face, such as inference API, inference endpoints and deployment solutions via various cloud services.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfb7f68b3bf810100/65d77918421dd35b0bebcb33/Screenshot_2024-02-22_at_16.40.40.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7ef2d37427c35b06/65d78ef8745ebcf6d39d4b6b/GenAI_Stack_(7).png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "This article presents how to leverage Gemma as the foundation model in a Retrieval-Augmented Generation (RAG) pipeline or system, with supporting models provided by Hugging Face, a repository for open-source models, datasets and compute resources.",
"contentType": "Tutorial"
} | Building a RAG System With Google's Gemma, Hugging Face and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/optimize-atlas-performance-advisor-query-analyzer-more | created | # Optimize With MongoDB Atlas: Performance Advisor, Query Analyzer, and More
Optimizing MongoDB performance involves understanding the intricacies of your database's schema and queries, and navigating this landscape might seem daunting. There can be a lot to keep in mind, but MongoDB Atlas provides several tools to help spot areas where how you interact with your data can be improved.
In this tutorial, we're going to go through what some of these tools are, where to find them, and how we can use what they tell us to get the most out of our database. Whether you're a DBA, developer, or just a MongoDB enthusiast, our goal is to empower you with the knowledge to harness the full potential of your data.
## Identify schema anti-patterns
As your application grows and use cases evolve, potential problems can present themselves in what was once a well-designed schema. How can you spot these? Well, in Atlas, from the data explorer screen, select the collection you'd like to examine. Above the displayed documents, you'll see a tab called "Schema Anti-Patterns."
Now, in my collection, I have a board that describes the tasks necessary for our next sprint, so my documents look something like this:
```json
{
"boardName": "Project Alpha",
"boardId": "board123",
"tasks":
{
"taskId": "task001",
"title": "Design Phase",
"description": "Complete the initial design drafts.",
"status": "In Progress",
"assignedTo": ["user123", "user456"],
"dueDate": "2024-02-15",
},
// 10,000 more tasks
]
}
```
While this worked fine when our project was small in scope, the lists of tasks necessary really grew out of control (relatable, I'm sure). Let's pop over to our schema anti-pattern tab and see what it says.
![Collection schema anti-pattern page][1]
From here, you'll be provided with a list of anti-patterns detected in your database and some potential fixes. If we click the "Avoid using unbounded arrays in documents" item, we can learn a little more.
![Collection schema anti-pattern page, dropdown for more info.][2]
This collection has a few problems. Inside my documents, I have a substantial array. Large arrays can cause multiple issues, from exceeding the document size limit (16 MB) to degrading the performance of indexes as the arrays grow in size. Now that I have identified this, I can click "Learn How to Fix This Issue" to be taken to the MongoDB documentation. In this case, the solution mentioned is referencing. This involves storing the tasks in a separate collection and having a field to indicate which board they belong to. This will solve my issue of the unbounded array, as shown in the sketch below.
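As a sketch of what that refactor could look like, each task becomes its own document in a separate collection (here a hypothetical `tasks` collection), carrying a `boardId` reference back to the board it belongs to:

```json
{
  "taskId": "task001",
  "boardId": "board123",
  "title": "Design Phase",
  "description": "Complete the initial design drafts.",
  "status": "In Progress",
  "assignedTo": ["user123", "user456"],
  "dueDate": "2024-02-15"
}
```

The board document then only holds board-level metadata, and the tasks for a given board can be fetched (and indexed) independently with a query such as `db.tasks.find({ boardId: "board123" })`.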
Now, every application is unique, and thus, how you use MongoDB to leverage your data will be equally unique. There is rarely one right answer for how to model your data with MongoDB, but with this tool, you are able to see what is slowing down your database and what you might consider changing — from unused indexes that are increasing your write operation times to over-reliance on the expensive `$lookup` operation, when embedded documents would do.
## Performance Advisor
While you continue to use your MongoDB database, performance should always be at the back of your mind. Slow performance can hamper the user's experience with your application and can sometimes even make it unusable. With larger datasets and complex operations, these slow operations can become harder to avoid without conscious effort. The Performance Advisor provides a holistic view of your cluster, and as the name suggests, can help identify and solve the performance issues.
The Performance Advisor is a tool available for M10+ clusters and serverless instances. It monitors queries that MongoDB considers slow, based on how long operations on your cluster typically take. When you open up your cluster in MongoDB Atlas, you'll see a tab called "Performance Advisor."
In this example, we have a database containing information on New York City taxi rides. A typical query in the application would look something like this:
```shell
db.yellow.find({ "dropoff_datetime": "2014-06-19 21:45:00",
"passenger_count": 1,
"trip_distance": {"$gt": 3 }
})
```
With a large enough collection, running queries on specific field data will generate potentially slow operations without properly indexed collections. If we look at suggested indexes, we're presented with this screen, displaying the indexes we may want to create.
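Based on the query shape above, the suggestion will likely be a compound index covering the queried fields. You could also create it yourself from the shell; the field order below is an assumption derived from the query:

```shell
db.yellow.createIndex({
  "dropoff_datetime": 1,
  "passenger_count": 1,
  "trip_distance": 1
})
```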
If you want to dive deeper into optimizing your applications with MongoDB Atlas, head over to the MongoDB documentation, or to our Developer Community Forums to see what other people are building.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta5008804d2f3ad0e/65b8cf0893cdf11de27cafc1/image3.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7fc9b26b8e7b5dcb/65b8cf077d4ae74bf4980919/image1.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcae2445ee8c4e5b1/65b8cf085f12eda542e220d7/image4.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3cddcbae41a303f4/65b8cf087d4ae7e2ee98091d/image5.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4ca5dd956a10c74c/65b8cf0830d47e0c7f5222f7/image7.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt07b872aa93e325e8/65b8cf0855a88a1fc1da7053/image6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt64042035d566596c/65b8cf088fc5c08d430bcb76/image2.png | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to get the most out of your MongoDB database using the tools provided to you by MongoDB Atlas.",
"contentType": "Tutorial"
} | Optimize With MongoDB Atlas: Performance Advisor, Query Analyzer, and More | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/swift/authentication-ios-apps-atlas-app-services | created | # Authentication for Your iOS Apps with Atlas App Services
Authentication is one of the most important features for any app these days, and there will be a point when your users might want to reset their password for different reasons.
Atlas App Services can help implement this functionality in a clear and simple way. In this tutorial, we’ll develop a simple app that you can follow along with and incorporate into your apps.
If you also want to follow along and check the code that I’ll be explaining in this article, you can find it in the Github repository.
## Context
The application consists of a login flow where the user will be able to create their own account by using a username/password. It will also allow them to reset the password by implementing the use of Atlas App Services for it and Universal Links.
There are different options in order to implement this functionality.
* You can configure an email provider to send a password reset email. This option will send an email to the user with the MongoDB logo and a URL that contains the necessary parameters that will be needed in order to reset the password.
* App Services can automatically run a password reset function. You can implement it guided by our password reset documentation. App Services passes this function unique confirmation tokens and data about the user. Use these values to define custom logic to reset a user's password.
* If you decide to use a custom password reset email from a specific domain by using an external service, when the email for the reset password is received, you will get a URL that will be valid for 30 minutes, and you will need to implement Universal Links for it so your app can detect the URL when the user taps on it and extract the tokens from it.
* You can define a function for App Services to run when you call `callResetPasswordFunction()` in the SDK. App Services passes this function unique confirmation tokens.
For this tutorial, we are going to use the first option. When triggered, it will send the user an email containing a URL that is valid for 30 minutes. But please be aware that we do not recommend using this option in production. Confirmation emails are not currently customizable beyond the base URL and subject line. In particular, they always come from a mongodb.com email address. For production apps, we recommend using a confirmation function. You can check how to run a confirmation function in our MongoDB documentation.
## Configuring authentication
First, you’ll need to create your Atlas App Services App. I recommend following our documentation and this will provide you with the base to start configuring your app.
After creating your app, go to the **Atlas App Services** tab, click on your app, and go to **Data Access → Authentication** on the sidebar.
In the Authentication Providers section, enable the provider **Email/Password**. In the configuration window that will get displayed after, we will focus on the **Password Reset Method** part.
For this example, the user confirmation will be done automatically. But make sure that the **Send a password reset email** option is enabled.
One important thing to note is that **you won’t be able to save and deploy these changes unless the URL section is completed**. Therefore, we’ll use a temporary URL and we’ll change it later to the final one.
Click on the Save Draft button and your changes will be deployed.
### Implementing the reset password functionality
Before starting to write the related code, please make sure that you have followed this quick start guide to make sure that you can use our Swift SDK.
The logic of implementing reset password will be implemented in the `MainViewController.swift` file. In it, we have an IBAction called `resetPasswordButtonTapped`, and inside we are going to write the following code:
``` swift
@IBAction func resetPasswordButtonTapped(_ sender: Any) {
let email = app.currentUser?.profile.email ?? ""
let client = app.emailPasswordAuth
client.sendResetPasswordEmail(email) { (error) in
DispatchQueue.main.async {
guard error == nil else {
print("Reset password email not sent: \(error!.localizedDescription)")
return
}
print("Password reset email sent to the following address: \(email)")
let alert = UIAlertController(title: "Reset Password", message: "Please check your inbox to continue the process", preferredStyle: UIAlertController.Style.alert)
alert.addAction(UIAlertAction(title: "OK", style: UIAlertAction.Style.default, handler: nil))
self.present(alert, animated: true, completion: nil)
}
}
}
```
By making a call to `client.sendResetPasswordEmail` with the user's email, App Services sends an email to the user that contains a unique URL. The user must visit this URL within 30 minutes to confirm the reset.
Now we have the first part of the functionality implemented. But if we try to tap on the button, it won’t work as expected. We must go back to our Atlas App Services App, to the Authentication configuration.
The URL that we define here will be the one that is sent in the email to the user. You can use a URL from your own website hosted on a different server, but if you don't have one, don't worry! Atlas App Services provides Static Hosting. You can use hosting to store individual pieces of content or to upload and serve your entire client application, but please note that in order to enable static hosting, **you must have a paid tier** (i.e., M2 or higher).
## Configuring hosting
Go to the Hosting section of your Atlas App Services app and click on the Enable Hosting button. App Services will begin provisioning hosting for your application, which may take a few minutes to complete.
The resource path shown in the Hosting section is the URL that will be used to redirect the user to our website so they can continue the process of resetting their password.
Now we have to go back to the Authentication section in your Atlas App Services app and tap on the Edit button for Email/Password. We will focus our attention on the lower area of the window.
In the Password Reset URL we are going to add our hosted URL. This will create the link between your back end and the email that gets sent to the user.
The base of the URL is included in every password reset email. App Services appends a unique `token` and `tokenId` to this URL. These serve as query parameters to create a unique link for every password reset. To reset the user's password, extract these query parameters from the user's unique URL.
In order to extract these query parameters and use them in our client application, we can use Universal Links.
## Universal links
According to Apple, when adding universal links support to your app, your users can tap a link to your website and get seamlessly redirected to your installed app without going through Safari. But if the app isn’t installed, then tapping a link to your website will open it in Safari.
**Note**: Be aware that in order to add the universal links entitlement to your Xcode project, you need to have an Apple Developer subscription.
#1 Add the **Associated Domains** entitlement to the **Signing & Capabilities** section of your project in Xcode, and add your hosted website to the domains using the syntax `applinks:`, followed by your domain.
#2 You now need to create an `apple-app-site-association` file that contains JSON data about the URL that the app will handle. In my case, this is the structure of my file. The value of the `appID` key is the team ID or app ID prefix, followed by the bundle ID.
``` json
{
"applinks": {
"apps": ],
"details": [
{
"appID": "QX5CR2FTN2.io.realm.marcabrera.aries",
"paths": [ "*" ]
}
]
}
}
```
#3 Upload the file to your HTTPS web server. In my case, I'll upload it to my Atlas App Services hosted website. Therefore, I now have two files in hosting, including `index.html`.
### Code
You need to implement the code that will handle the functionality when your user taps on the link from the received email.
Go to the `SceneDelegate.swift` file of your Xcode project, and in the `scene(_:continue:)` delegate method, add the following code:
``` swift
func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
if let url = userActivity.webpageURL {
handleUniversalLinks(url)
}
}
```
``` swift
func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
guard let _ = (scene as? UIWindowScene) else { return }
// UNIVERSAL LINKS HANDLING
guard let userActivity = connectionOptions.userActivities.first, userActivity.activityType == NSUserActivityTypeBrowsingWeb,
let incomingURL = userActivity.webpageURL else {
// If we don't get a link (meaning it's not handling the reset password flow then we have to check if user is logged in)
if let _ = app.currentUser {
// We make sure that the session is being kept active for users that have previously logged in
let storyboard = UIStoryboard(name: "Main", bundle: nil)
let tabBarController = storyboard.instantiateViewController(identifier: "TabBarController")
let navigationController = UINavigationController(rootViewController: tabBarController)
// Attach the navigation stack to the window so the logged-in UI is actually shown
window?.rootViewController = navigationController
window?.makeKeyAndVisible()
}
return
}
handleUniversalLinks(incomingURL)
}
```
``` swift
private func handleUniversalLinks(_ url: URL) {
// We get the token and tokenId URL parameters, they're necessary in order to reset password
let token = url.valueOf("token")
let tokenId = url.valueOf("tokenId")
let storyboard = UIStoryboard(name: "Main", bundle: nil)
let resetPasswordViewController = storyboard.instantiateViewController(identifier: "ResetPasswordViewController") as! ResetPasswordViewController
resetPasswordViewController.token = token
resetPasswordViewController.tokenId = tokenId
// Show the reset password screen (one simple approach: make it the window's root view controller)
window?.rootViewController = resetPasswordViewController
window?.makeKeyAndVisible()
}
```
The `handleUniversalLinks()` private method extracts the `token` and `tokenId` parameters that we need in order to reset the password. We store them as properties on the `ResetPasswordViewController`.
Also note that we use the function `url.valueOf("token")`, an extension I created that extracts the query parameter matching the string passed as an argument; its value is stored in the `token` variable.
``` swift
extension URL {
// Function that returns a specific query parameter from the URL
func valueOf(_ queryParameterName: String) -> String? {
guard let url = URLComponents(string: self.absoluteString) else { return nil }
return url.queryItems?.first(where: {$0.name == queryParameterName})?.value
}
}
```
**Note**: This functionality won't work if the app isn't running in the foreground (for example, if the user has terminated it). For that case, we need to implement similar functionality in the `willConnectTo()` delegate method.
``` swift
func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
// Use this method to optionally configure and attach the UIWindow `window` to the provided UIWindowScene `scene`.
// If using a storyboard, the `window` property will automatically be initialized and attached to the scene.
// This delegate does not imply the connecting scene or session are new (see `application:configurationForConnectingSceneSession` instead).
guard let _ = (scene as? UIWindowScene) else { return }
// UNIVERSAL LINKS HANDLING
guard let userActivity = connectionOptions.userActivities.first, userActivity.activityType == NSUserActivityTypeBrowsingWeb,
let incomingURL = userActivity.webpageURL else {
// If we don't get a link (meaning it's not handling the reset password flow then we have to check if user is logged in)
if let _ = app.currentUser {
// We make sure that the session is being kept active for users that have previously logged in
let storyboard = UIStoryboard(name: "Main", bundle: nil)
let mainVC = storyboard.instantiateViewController(identifier: "MainViewController")
window?.rootViewController = mainVC
window?.makeKeyAndVisible()
}
return
}
handleUniversalLinks(incomingURL)
}
```
## Reset password
This view controller contains a text field that will capture the new password that the user wants to set up, and when the Reset Password button is tapped, the `resetPassword` function will get triggered and it will make a call to the Client SDK’s resetPassword() function. If there are no errors, a success alert will be displayed on the app. Otherwise, an error message will be displayed.
``` swift
private func resetPassword() {
let password = confirmPasswordTextField.text ?? ""
app.emailPasswordAuth.resetPassword(to: password, token: token ?? "", tokenId: tokenId ?? "") { (error) in
DispatchQueue.main.async {
self.confirmButton.hideLoading()
guard error == nil else {
print("Failed to reset password: \(error!.localizedDescription)")
self.presentErrorAlert(message: "There was an error resetting the password")
return
}
print("Successfully reset password")
self.presentSuccessAlert()
}
}
}
```
## Repository
The code for this project can be found in the GitHub repository.
I hope you found this tutorial useful and that it resolves any doubts you may have! I encourage you to explore our Realm Swift SDK documentation so you can check out all the features and advantages that Realm can offer you while developing your iOS apps. We also have a lot of resources for you to dive in and learn how to implement them. | md | {
"tags": [
"Swift",
"Atlas",
"iOS"
],
"pageDescription": "Learn how to easily implement reset password functionality thanks to Atlas App Services on your iOS apps.",
"contentType": "Tutorial"
} | Authentication for Your iOS Apps with Atlas App Services | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/whatsapp-business-api-data-api | created | # WhatsApp Business API Webhook Integration with Data API
This tutorial walks through integrating the WhatsApp Business API --- specifically, Cloud API --- and webhook setup in front of the MongoDB Atlas Data API.
Most interestingly, we are going to use MongoDB Atlas custom HTTPS endpoints and Atlas Functions.
The WhatsApp Business Cloud API is intended for people developing for themselves or their organization, and the same applies to Business Solution Providers (BSPs).
The webhook will trigger whenever a business phone number receives a message, the status of a sent message changes, and more.
We will examine a way to set up webhooks to connect with WhatsApp, in addition to how to set up a function that sends/receives messages and stores them in the MongoDB database.
## Prerequisites
The core requirement is a Meta Business account. If you don't have a business account, then you can also use the Test Business account that is provided by Meta. Refer to the article Create a WhatsApp Business Platform account for more information.
WhatsApp Business Cloud API is a part of Meta's Graph API, so you need to set up a Meta Developer account and a Meta developer app. You can follow the instructions from the Get Started with Cloud API, hosted by Meta guide and complete all the steps explained in the docs to set everything up. When you create your application, make sure that you create an "Enterprise" application and that you add WhatsApp as a service. Once your application is created, find the following and store them somewhere.
- Access token: You can use a temporary access token from your developer app > WhatsApp > Getting Started page, or you can generate a permanent access token.
- Phone number ID: You can find it from your developer app > WhatsApp > Getting Started page. It has the label "Phone number ID", and can be found under the "From" section.
Next, you'll need to set up a MongoDB Atlas account, which you can learn how to do using the MongoDB Getting Started with Atlas article. Once your cluster is ready, create a database called `WhatsApp` and a collection called `messages`. You can leave the collection empty for now.
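If you prefer the shell over the Atlas UI, a quick way to create them from mongosh might look like the following (using the same names as the rest of this tutorial):

```javascript
// In mongosh, connected to your Atlas cluster
const whatsappDb = db.getSiblingDB("WhatsApp"); // the database is created on first use
whatsappDb.createCollection("messages");        // empty collection for sent/received messages
```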
Once you have set up your MongoDB Atlas cluster, refer to the article on how to create an App Services app to create your MongoDB App Services application. On the wizard screen asking you for the type of application to build, choose the "Build your own App" template.
## Verification Requests endpoint for webhook
The first thing you need to configure in the WhatsApp application is a verification request endpoint. This endpoint validates the verify token and secures your application so that not just anyone can use your endpoints to send messages.
When you configure a webhook in the WhatsApp Developer App Dashboard, it will send a GET request to the Verification Requests endpoint. Let's write the logic for this endpoint in a function and then create a custom HTTPS endpoint in Atlas.
To create a function in Atlas, use the "App Services" > "Functions" menu under the BUILD section. From that screen, click on the "Create New Function" button and it will show the Add Function page.
Here, you will see two tabs: "Settings" and "Function Editor." Start with the "Settings" tab and let's configure the required details:
- Name: Set the Function Name to `webhook_get`.
- Authentication: Select `System`. It will bypass the rule and authentication when our endpoint hits the function.
To write the code, click on the "Function Editor" tab and replace the code in your editor with the function below. Here is a brief overview of how it works.
You need to set a secret value for `VERIFY_TOKEN`. You can pick any random value for this field, and you will need to add it to your WhatsApp webhook configuration later on.
The request receives three query parameters: `hub.mode`, `hub.verify_token`, and `hub.challenge`.
We need to check if `hub.mode` is `subscribe` and that the `hub.verify_token` value matches the `VERIFY_TOKEN`. If so, we return the `hub.challenge` value as a response. Otherwise, the response is forbidden.
```javascript
// this function Accepts GET requests at the /webhook endpoint. You need this URL to set up the webhook initially, refer to the guide https://developers.facebook.com/docs/graph-api/webhooks/getting-started#verification-requests
exports = function({ query, headers, body }, response) {
/**
* UPDATE YOUR VERIFY TOKEN
* This will be the Verify Token value when you set up the webhook
**/
const VERIFY_TOKEN = "12345";
// Parse params from the webhook verification request
let mode = query["hub.mode"],
token = query["hub.verify_token"],
challenge = query["hub.challenge"];
// Check the mode and token values are correct
if (mode == "subscribe" && token == VERIFY_TOKEN) {
// Respond with 200 OK and challenge token from the request
response.setStatusCode(200);
response.setBody(challenge);
} else {
// Responds with '403 Forbidden' if verify tokens do not match
response.setStatusCode(403);
}
};
```
Now, we are all ready with the function. Click on the "Save" button above the tabs section, and use the "Deploy" button in the blue bar at the top to deploy your changes.
Now, let's create a custom HTTPS endpoint to expose this function to the web. From the left navigation bar, follow the "App Services" > "HTTPS Endpoints" link, and then click on the "Add an Endpoint" button. It will show the Add Endpoint page.
Let's configure the details step by step:
1. Route: This is the name of your endpoint. Set it to `/webhook`.
2. Operation Type under Endpoint Settings: This shows the read-only callback URL for the HTTPS endpoint. Copy the URL and store it somewhere; the WhatsApp webhook configuration will need it.
3. HTTP Method under Endpoint Settings: Select the "GET" method from the dropdown.
4. Respond With Result under Endpoint Settings: Set it to "On" because WhatsApp requires the response with the exact status code.
5. Function: You will see the previously created function `webhook_get`. Select it.
We're all done. We just need to click on the "Save" button at the bottom, and deploy the application.
Wow, that was quick! Now you can go to **WhatsApp > Configuration** in your Meta developer app and set up the Callback URL that we generated in step 2 of the custom endpoint creation above. For the Verify Token, enter the value that you specified in the `VERIFY_TOKEN` constant of the function you just created.
## Event Notifications webhook endpoint
The Event Notifications endpoint is a POST request. Whenever new events occur, it will send a notification to the callback URL. We will cover two types of notifications: received messages and message status notifications if you have subscribed to the `messages` object under the WhatsApp Business Account product. First, we will design the schema and write the logic for this endpoint in a function and then create a custom HTTPS endpoint in Atlas.
Let's design our sample database schema and see how we will store the sent/received messages in our MongoDB collection for future use. You can reply to the received messages and see whether the user has read the sent message.
### Sent message document:
```json
{
type: "sent", // this is we sent a message from our WhatsApp business account to the user
messageId: "", // message id that is from sent message object
contact: "", // user's phone number included country code
businessPhoneId: "", // WhatsApp Business Phone ID
message: {
// message content whatever we sent
},
status: "initiated | sent | received | delivered | read | failed", // message read status by user
createdAt: ISODate(), // created date
updatedAt: ISODate() // updated date - whenever message status changes
}
```
### Received message document:
```json
{
type: "received", // this is we received a message from the user
messageId: "", // message id that is from the received message object
contact: "", // user's phone number included country code
businessPhoneId: "", // WhatsApp Business Phone ID
message: {
// message content whatever we received from the user
},
status: "ok | failed", // is the message ok or has an error
createdAt: ISODate() // created date
}
```
Let's create another function in Atlas. As before, go to the functions screen, and click the "Create New Function" button. It will show the Add Function page. Use the following settings for this new function.
- Name: Set the Function Name to `webhook_post`.
- Authentication: Select `System`. It will bypass the rule and authentication when our endpoint hits the function.
To write the code, click on the "Function Editor" tab and replace the code in your editor with the function below. Here is a brief overview of how it works.
In short, this function will do either an update operation if the notification is for a message status update, or an insert operation if a new message is received.
```javascript
// Accepts POST requests at the /webhook endpoint, and this will trigger when a new message is received or message status changes, refer to the guide https://developers.facebook.com/docs/graph-api/webhooks/getting-started#event-notifications
exports = function({ query, headers, body }, response) {
body = JSON.parse(body.text());
if (body.object && body.entry) {
// Find the name of the MongoDB service you want to use (see "Linked Data Sources" tab)
const clusterName = "mongodb-atlas",
dbName = "WhatsApp",
collName = "messages";
body.entry.map(function(entry) {
entry.changes.map(function(change) {
// Message status notification
if (change.field == "messages" && change.value.statuses) {
change.value.statuses.map(function(status) {
// Update the status of a message
context.services.get(clusterName).db(dbName).collection(collName).updateOne(
{ messageId: status.id },
{
$set: {
"status": status.status,
"updatedAt": new Date(parseInt(status.timestamp)*1000)
}
}
);
});
}
// Received message notification
else if (change.field == "messages" && change.value.messages) {
change.value.messages.map(function(message) {
let status = "ok";
// Any error
if (message.errors) {
status = "failed";
}
// Insert the received message
context.services.get(clusterName).db(dbName).collection(collName).insertOne({
"type": "received", // this is we received a message from the user
"messageId": message.id, // message id that is from the received message object
"contact": message.from, // user's phone number included country code
"businessPhoneId": change.value.metadata.phone_number_id, // WhatsApp Business Phone ID
"message": message, // message content whatever we received from the user
"status": status, // is the message ok or has an error
"createdAt": new Date(parseInt(message.timestamp)*1000) // created date
});
});
}
});
});
}
response.setStatusCode(200);
};
```
Now, we are all set with the function. Click on the "Save" button above the tabs section.
Just like before, let's create a custom HTTPS endpoint in the "HTTPS Endpoints" tab. Click on the "Add an Endpoint" button, and it will show the Add Endpoint page.
Let's configure the details step by step:
1. Route: Set it to `/webhook`.
2. HTTP Method under Endpoint Settings: Select the "POST" method from the dropdown.
3. Respond With Result under Endpoint Settings: Set it to `On`.
4. Function: You will see the previously created function `webhook_post`. Select it.
We're all done. We just need to click on the "Save" button at the bottom, and then deploy the application again.
Excellent! We have just developed a webhook for sending and receiving messages and updating in the database, as well. So, you can list the conversation, see who replied to your message, and follow up.
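As a quick illustration of that follow-up idea, here is one way you might query the collection from mongosh to find sent messages that have not been read yet. This is only a sketch; the fields follow the schema we defined above.

```javascript
// Sent messages that were sent or delivered but not yet read by the user
db.getSiblingDB("WhatsApp").messages.find(
  { type: "sent", status: { $in: ["sent", "delivered"] } },
  { _id: 0, contact: 1, status: 1, updatedAt: 1 }
).sort({ updatedAt: -1 });
```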
## Send Message endpoint
The Send Message endpoint is a POST request, very similar to the Send Messages endpoint of the WhatsApp Business API. The purpose of this endpoint is to send the message and store it, with its `messageId`, in the collection so that the Event Notifications webhook endpoint we developed in the previous section can update the status of the message in the same document. We will write the logic for this endpoint in a function and then create a custom HTTPS endpoint in Atlas.
Let's create a new function in Atlas with the following settings.
- Name: Set the Function Name to `send_message`.
- Authentication: Select "System." It will bypass the rule and authentication when our endpoint hits the function.
Replace the code in your editor with the function below. Here is a brief overview of how it works.
The request params should be:
- body: The request body should be the same as the WhatsApp Send Message API.
- headers: Pass Authorization Bearer token. You can use a temporary or permanent token. For more details, read the prerequisites section.
- query: Pass the business phone ID in `businessPhoneId` property. For how to access it, read the prerequisites section.
This function uses the `https` Node module to call the send message API of WhatsApp Business. If the message is sent successfully, it inserts a document in the collection with the `messageId`.
```javascript
// Accepts POST requests at the /send_message endpoint, and this will allow you to send messages the same as documentation https://developers.facebook.com/docs/whatsapp/cloud-api/guides/send-messages
exports = function({ query, headers, body }, response) {
response.setHeader("Content-Type", "application/json");
body = body.text();
// Business phone ID is required
if (!query.businessPhoneId) {
response.setStatusCode(400);
response.setBody(JSON.stringify({
message: "businessPhoneId is required, you can pass in query params!"
}));
return;
}
// Find the name of the MongoDB service you want to use (see "Linked Data Sources" tab)
const clusterName = "mongodb-atlas",
dbName = "WhatsApp",
collName = "messages",
https = require("https");
// Prepare request options
const options = {
hostname: "graph.facebook.com",
port: 443,
path: `/v15.0/${query.businessPhoneId}/messages`,
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": headers.Authorization,
"Content-Length": Buffer.byteLength(body)
}
};
const req = https.request(options, (res) => {
response.setStatusCode(res.statusCode);
res.setEncoding('utf8');
let data = [];
res.on('data', (chunk) => {
data.push(chunk);
});
res.on('end', () => {
if (res.statusCode == 200) {
let bodyJson = JSON.parse(body);
let stringData = JSON.parse(data[0]);
// Insert the message
context.services.get(clusterName).db(dbName).collection(collName).insertOne({
"type": "sent", // this is we sent a message from our WhatsApp business account to the user
"messageId": stringData.messages[0].id, // message id that is from the received message object
"contact": bodyJson.to, // user's phone number included country code
"businessPhoneId": query.businessPhoneId, // WhatsApp Business Phone ID
"message": bodyJson, // message content whatever we received from the user
"status": "initiated", // default status
"createdAt": new Date() // created date
});
}
response.setBody(data[0]);
});
});
req.on('error', (e) => {
// Network-level errors don't carry an HTTP status code, so respond with a 500
response.setStatusCode(500);
response.setBody(JSON.stringify(e));
});
// Write data to the request body
req.write(body);
req.end();
};
```
Now, we are all ready with the function. Click on the "Save" button above the tabs section.
Let's create a custom HTTPS endpoint for this function with the following settings.
1. Route: Set it to `/send_message`.
2. HTTP Method under Endpoint Settings: Select the "POST" method from the dropdown.
3. Respond With Result under Endpoint Settings: Set it to "On."
4. Function: You will see the previously created function `send_message`. Select it.
We're all done. We just need to click on the "Save" button at the bottom.
Refer to the below curl request example. This will send a default welcome template message to the users. You just need to replace your value inside the `<>` brackets.
```bash
curl --location '<SEND_MESSAGE_ENDPOINT_URL>?businessPhoneId=<BUSINESS_PHONE_ID>' \
--header 'Authorization: Bearer <ACCESS_TOKEN>' \
--header 'Content-Type: application/json' \
--data '{
    "messaging_product": "whatsapp",
    "to": "<RECIPIENT_PHONE_NUMBER>",
    "type": "template",
    "template": {
        "name": "hello_world",
        "language": { "code": "en_US" }
    }
}'
```
Great! We have just developed an endpoint that sends messages to the user's WhatsApp account from your business phone number.
## Conclusion
In this tutorial, we developed three custom HTTPS endpoints and their functions in MongoDB Atlas. One is Verification Requests, which verifies the request from WhatsApp > Developer App's webhook configuration using Verify Token. The second is Event Notifications, which can read sent messages and status updates, receive messages, and store them in MongoDB's collection. The third is Send Message, which can send messages from your WhatsApp business phone number to the user's WhatsApp account.
Apart from these things, we have built a collection for messages. You can use it for many use cases, like designing a chat conversation page where you can see the conversation and reply back to the user. You can also build your own chatbot to reply to users.
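For example, listing the conversation with a single contact could be as simple as the sketch below, shown here in mongosh. The phone number is a placeholder.

```javascript
// All messages exchanged with one contact, oldest first
db.getSiblingDB("WhatsApp").messages.find(
  { contact: "15551234567" },                        // placeholder phone number
  { _id: 0, type: 1, message: 1, status: 1, createdAt: 1 }
).sort({ createdAt: 1 });
```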
If you have any questions or feedback, check out the MongoDB Community Forums and let us know what you think. | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "In this article, learn how to integrate the WhatsApp Business API with MongoDB Atlas functions.",
"contentType": "Tutorial"
} | WhatsApp Business API Webhook Integration with Data API | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/neurelo-getting-started | created | # Neurelo and MongoDB: Getting Started and Fun Extras
Ready to hit the ground running with less code, fewer database complexities, and easier platform integration? Then this tutorial on navigating the intersection between Neurelo and MongoDB Atlas is for you.
Neurelo is a platform that utilizes AI, APIs, and the power of the cloud to help developers better interact with and manipulate their data stored in MongoDB, PostgreSQL, or MySQL. This straightforward approach to data programming allows developers to work with their data from their applications, improving scalability and efficiency, while ensuring full transparency. The tutorial below will show readers how to properly set up a Neurelo account, how to connect it with their MongoDB Atlas account, how to use Neurelo’s API Playground to manipulate a collection, and how to create complex queries using Neurelo’s AI Assist feature.
Let’s get started!
### Prerequisites for success
- A MongoDB Atlas account
- A MongoDB Atlas cluster
- A Neurelo account
## The set-up
### Step 1: MongoDB Atlas Cluster
Our first step is to make sure we have a MongoDB Atlas cluster ready — if needed, learn more about how to create a cluster. Please ensure you have a memorable username and password, and that you have the proper network permissions in place. To make things easier, you can use `0.0.0.0/0` (allow access from anywhere) as the IP access list entry, but please note that this is not recommended for production or if you have sensitive information in your cluster.
Once the cluster is set up, load in the MongoDB sample data. This is important because we will be using the `sample_restaurants` database and the `restaurants` collection. Once the cluster is set up, let’s create our Neurelo account if not already created.
### Step 2: Neurelo account creation and project initialization
Access Neurelo’s dashboard and follow the instructions to create an account. Once finished, you will see this home screen.
Initialize a new project by clicking the orange “New” button in the middle of the screen. Fill in each section of the pop-up.
The `Organization` name is automatically filled for you but please pick a unique name for your project, select the `Database Engine` to be used (we are using MongoDB), select the language necessary for your project (this is optional since we are not using a language for this tutorial), and then fill in a description for future you or team members to know what’s going on (also an optional step).
Once you click the orange “Create” button, you’ll be shown the three options in the screenshot below. It’s encouraged for new Neurelo users to click on the “Quick Start” option. The other two options are there for you to explore once you’re no longer a novice.
You’ll be taken to this quick start. Follow the steps through.
Click on “Connect Data Source.” Please go to your MongoDB Atlas cluster and copy the connection string to your cluster. When putting in your Connection String to Neurelo, you will need to specify the database you want to use at the end of the string. There is no need to specify which collection.
Since we are using our `sample_restaurants` database for this example, we want to ensure it’s included in the Connection String. It’ll look something like this:
```
mongodb+srv://mongodb:<password>@cluster0.xh8qopq.mongodb.net/sample_restaurants
```
Once you’re done, click “Test Connection.” If you’re unable to connect, it might be a network error: go into MongoDB Atlas’ Network Access settings and add the two IP addresses shown on the `New Data Source` screen. Once “Test Connection” is successful, hit “Submit.”
Now, click on the orange “New Environment” button. In Neurelo, environments are used so developers can run their APIs (auto-generated and using custom queries) against their data. Please fill in the fields.
Once your environment is successfully created, it’ll turn green and you can continue on to creating your Access Token. Click the orange “New Access Token” button. These tokens grant the users permission to access the APIs for a specific environment.
Store your key somewhere safe — if you lose it, you’ll need to generate a new one.
The last step is to activate the runners by clicking the button.
And congratulations! You have successfully created a project in Neurelo.
### Step 3: Filtering data using the Neurelo Playground
Now we can play around with the documents in our MongoDB collection and actually filter through them using the Playground.
In your API Playground “Headers” area, please include your Token Key in the `X-API-KEY` header. This makes it so you’re properly connected to the correct environment.
Now you can use Neurelo’s API playground to access the documents located in your MongoDB database.
Let’s say we want to return multiple documents from our restaurant category. We want to return restaurants that are located in the borough of Brooklyn in New York and we want those restaurants that serve American cuisine.
To utilize Neurelo’s API to find us five restaurants, we can click on the “GET Find many restaurants” tab in our “restaurants” collection in the sidebar, click on the `Parameters` header, and fill in our parameters as such:
```
select: {"id": true, "borough": true, "cuisine": true, "name": true}
```
```
filter: {"AND": {"borough": {"equals": "Brooklyn"}, "cuisine": {"equals": "American"}}]}
```
```
take: 5
```
Your response should look something like this:
```
{
"data": [
{
"id": "5eb3d668b31de5d588f4292a",
"borough": "Brooklyn",
"cuisine": "American",
"name": "Riviera Caterer"
},
{
"id": "5eb3d668b31de5d588f42931",
"borough": "Brooklyn",
"cuisine": "American",
"name": "Regina Caterers"
},
{
"id": "5eb3d668b31de5d588f42934",
"borough": "Brooklyn",
"cuisine": "American",
"name": "C & C Catering Service"
},
{
"id": "5eb3d668b31de5d588f4293c",
"borough": "Brooklyn",
"cuisine": "American",
"name": "The Movable Feast"
},
{
"id": "5eb3d668b31de5d588f42949",
"borough": "Brooklyn",
"cuisine": "American",
"name": "Mejlander & Mulgannon"
}
]
}
```
As you can see from our output, the `select` feature maps to our MongoDB `$project` operator. We are choosing which fields from our document to show in our output. The `filter` feature mimics our `$match` operator and the `take` feature mimics our `$limit` operator. This is just one simple example, but the opportunities truly are endless. Once you become familiar with these APIs, you can use these APIs to build your applications with MongoDB.
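If it helps to see that mapping concretely, the same request expressed directly against MongoDB in mongosh would look roughly like this:

```
// Rough mongosh equivalent of the Neurelo request above:
// select -> projection, filter -> query condition, take -> limit
db.getSiblingDB("sample_restaurants").restaurants.find(
  { borough: "Brooklyn", cuisine: "American" },
  { borough: 1, cuisine: 1, name: 1 }
).limit(5);
```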
Neurelo truly allows developers to easily and quickly set up API calls so they can access and interact with their data.
### Step 4: Complex queries in Neurelo
If we have a use case where Neurelo’s auto-generated endpoints do not give us the results we want, we can actually create complex queries very easily in Neurelo. We are able to create our own custom endpoints for more complex queries that are necessary to filter through the results we want. These queries can be aggregation queries, find queries, or any query that MongoDB supports depending on the use case. Let’s run through an example together.
Access your Neurelo “Home” page and click on the project “Test” we created earlier. Then, click on “Definitions” on the left-hand side of the screen and click on “Custom Queries.”
Click on the orange “New” button in the middle of the screen to add a new custom query endpoint and once the screen pops up, come up with a unique name for your query. Mine is just “complexQuery.”
With Neurelo, you can actually use their AI Assist feature to help come up with the query you’re looking for. Built upon LLMs, AI Assist for complex queries can help you come up with the code you need.
Click on the multicolored “AI Assist” button on the top right-hand corner to bring up the AI Assist tab.
Type in a prompt. Ours is:
“Please give me all restaurants that are in Brooklyn and are American cuisine.”
You can also update the prompt to include the projections to be returned. Changing the prompt to
“get me all restaurants that are in Brooklyn and serve the American cuisine and show me the name of the restaurant” will generate a query that also returns the restaurant’s name.
As you can see, AI Assist comes up with a valid complex query that we can build upon. This is incredibly helpful especially if we aren’t familiar with syntax or if we just don’t feel like scrolling through documentation.
Edit the custom query to better help with your use case.
Click on the “Use This” button to import the query into your Custom Query box. Using the same example as before, we want to ensure we are able to see the name of the restaurant, the borough, and the cuisine. Here’s the updated version of this query:
```
{
"find": "restaurants",
"filter": {
"borough": "Brooklyn",
"cuisine": "American"
},
"projection": {
"_id": 0,
"name": 1,
"borough": 1,
"cuisine": 1
}
}
```
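Because this custom query is a standard MongoDB `find` command, you can sanity-check the same document shape directly in mongosh before testing it in Neurelo:

```
// The same "find" command run directly against the cluster in mongosh
db.getSiblingDB("sample_restaurants").runCommand({
  find: "restaurants",
  filter: { borough: "Brooklyn", cuisine: "American" },
  projection: { _id: 0, name: 1, borough: 1, cuisine: 1 }
});
```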
Click the orange “Test Query” button, put in your Access Token, click on the environment approval button, and click run!
Your output will look like this:
```
{
"data": {
"cursor": {
"firstBatch":
{
"borough": "Brooklyn",
"cuisine": "American",
"name": "Regina Caterers"
},
{
"borough": "Brooklyn",
"cuisine": "American",
"name": "The Movable Feast"
},
{
"borough": "Brooklyn",
"cuisine": "American",
"name": "Reben Luncheonette"
},
{
"borough": "Brooklyn",
"cuisine": "American",
"name": "Cody'S Ale House Grill"
},
{
"borough": "Brooklyn",
"cuisine": "American",
"name": "Narrows Coffee Shop"
},
…
```
As you can see, you’ve successfully created a complex query that shows you the name of the restaurant, the borough, and the cuisine. You can now commit and deploy this as a custom endpoint in your Neurelo environment and call this API from your applications. Great job!
## To sum things up...
This tutorial has successfully taken you through how to create a Neurelo account, connect your MongoDB Atlas database to Neurelo, explore Neurelo’s API Playground, and even create complex queries using their AI Assistant function. Now that you’re familiar with the basics, you can always take things a step further and incorporate the above learnings in a new application.
For help, Neurelo has tons of documentation, getting started videos, and information on their APIs.
To learn more about why developers should use Neurelo, check out the hyper-linked resource, as well as this article produced by our very own Matt Asay.
| md | {
"tags": [
"MongoDB",
"Neurelo"
],
"pageDescription": "New to Neurelo? Let’s dive in together. Learn the power of this platform through our in-depth tutorial which will take you from novice to expert in no time. ",
"contentType": "Tutorial"
} | Neurelo and MongoDB: Getting Started and Fun Extras | 2024-05-20T17:32:23.501Z |