Transcript
MongoDB
MongoDB (from "humongous“) is a scalable, high-performance, open source, document-oriented database. Written in C++.
Home: http://www.mongodb.org/ Support by http://www.10gen.com/ Production Deploy
1. Databases are specializing - the "one size fits all" approach no longer applies. MongoDB sits between in-memory key/value stores and relational persistent databases.
2. By reducing the transactional semantics the db provides, one can still solve an interesting set of problems where performance is very important, and horizontal scaling then becomes easier. The simpler, the faster.
3. The document data model (JSON/BSON) is easy to code to, easy to manage (schemaless), and yields excellent performance by grouping relevant data together internally, though it wastes a bit of space.
4. A non-relational approach is the best path to database solutions which scale horizontally to many machines, and it is easy to scale out for non-complex applications.
5. While there is an opportunity to relax certain capabilities for better performance, there is also a need for deeper functionality than that provided by pure key/value stores.
3. Aggregation (MapReduce, group, distinct, etc.)
4. Fixed-size collections: capped collections are fixed in size and are useful for certain types of data, such as logs.
5. File storage: a protocol for storing large files, using subcollections to store file metadata separately from content chunks.
6. Replication: includes master-slave mode and replica-set mode.
7. Security: simple authentication and authorization.
Agenda
Getting Up to Speed with MongoDB (Summary: document oriented & schema-free)
Fri Apr 1 14:37:08 [initandlisten] db version v1.8.0, pdfile version 4.5
Fri Apr 1 14:37:08 [initandlisten] git version: 9c28b1d608df0ed6ebe791f63682370082da41c0
Fri Apr 1 14:37:08 [initandlisten] build sys info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
Fri Apr 1 14:37:08 [initandlisten] waiting for connections on port 27017
Fri Apr 1 14:37:08 [websvr] web admin interface listening on port 28017
Developing with MongoDB: Connect to mongod
$ /opt/mongo/bin/mongo
MongoDB shell version: 1.8.0
connecting to: test
autocomplete:PRIMARY> exit
Bye
usage: /opt/mongo/bin/mongo [options] [db address] [file names (ending in .js)]
Developing with MongoDB: MongoDB Shell
MongoDB comes with a JavaScript shell that allows interaction with a MongoDB instance from the command line.
Query – find()
db.c.find() returns everything in the collection c.
db.users.find({"age" : 27}) returns documents where the value of "age" is 27.
db.users.find({}, {"username" : 1, "email" : 1}) is for when you are interested only in the "username" and "email" keys.
db.users.find({}, {"fatal_weakness" : 0}) is for when you never want to return the "fatal_weakness" key.
Developing with MongoDB: DML continued - Safe Operations
1. MongoDB does not wait for a response by default when writing to the database. Use the getLastError command to ensure that operations have succeeded.
2. The getLastError command can be invoked automatically with many of the drivers when saving and updating in "safe" mode (some drivers call this "set write concern").
When shutting down, mongod will:
1. wait for any currently running operations or file preallocations to finish (this could take a moment)
2. close all open connections
3. flush all data to disk
4. halt

Use the shutdown command:
> use admin
switched to db admin
> db.shutdownServer();
server should be down...
Agenda
Getting Up to Speed with MongoDB (Summary: document oriented & schema-free)
Developing with MongoDB (Summary: find())
Advanced Usage (Index & Aggregation, GridFS)
Administration (admin, replication, sharding)
MISC
Index continued: changing indexes
db.runCommand({"dropIndexes" : "foo", "index" : "alphabet"})
db.people.ensureIndex({"username" : 1}, {"background" : true})
Using the {"background" : true} option builds the index in the background while handling incoming requests. If you do not include the background option, the database will block all other requests while the index is being built.
MapReduce
It is a method of aggregation that can be easily parallelized across multiple servers. It splits up a problem, sends chunks of it to different machines, and lets each machine solve its part of the problem. When all of the machines are finished, they merge all of the pieces of the solution back into a full solution.

Example: finding all keys in a collection
> map = function() {
...   for (var key in this) {
...     emit(key, {count : 1});
...   }
... };
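The emit/group/reduce flow behind the shell example above can be modeled in a few lines of plain Python. This is a conceptual sketch of the phases, not MongoDB's actual execution engine, and the sample documents are made up.

```python
from collections import defaultdict

def map_phase(docs):
    # Mirror of the shell map function: emit (key, {count: 1})
    # for every key in every document.
    emitted = []
    for doc in docs:
        for key in doc:
            emitted.append((key, {"count": 1}))
    return emitted

def reduce_phase(emitted):
    # Group the emitted pairs by key, then sum the counts.
    totals = defaultdict(int)
    for key, value in emitted:
        totals[key] += value["count"]
    return dict(totals)

docs = [{"_id": 1, "x": 1}, {"_id": 2, "x": 2, "y": 3}]
print(reduce_phase(map_phase(docs)))  # {'_id': 2, 'x': 2, 'y': 1}
```

Because the reduce step only sums per-key counts, each machine can reduce its own chunk and the partial totals can be merged the same way, which is what makes the pattern parallelizable.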
MongoDB Advanced Usage: Capped Collections
1. Capped collections automatically age out the oldest documents as new documents are inserted.
2. Documents cannot be removed or deleted (aside from the automatic age-out described earlier), and updates that would cause documents to move (in general, updates that cause documents to grow in size) are disallowed.
3. Inserts into a capped collection are extremely fast.
4. By default, any find performed on a capped collection will return results in insertion order.
5. Ideal for use cases like logging.
6. Replication uses a capped collection as the oplog.
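The age-out behavior can be illustrated with a fixed-size buffer in Python. `collections.deque` with `maxlen` is only a rough stand-in for a capped collection: it caps by document count rather than bytes, and it does not model the update restrictions.

```python
from collections import deque

# A toy "capped collection" that holds at most 3 documents.
capped = deque(maxlen=3)

for i in range(5):
    # Inserting past the cap silently evicts the oldest document,
    # just as a capped collection ages out old entries.
    capped.append({"event": i})

print(list(capped))  # [{'event': 2}, {'event': 3}, {'event': 4}]
```

The two oldest documents have aged out, and iteration still yields the survivors in insertion order, matching points 1 and 4 above.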
MongoDB Advanced Usage GridFS: Storing Files
GridFS is a mechanism for storing large binary files in MongoDB. Why use GridFS?
• Using GridFS can simplify your stack. If you're already using MongoDB, GridFS obviates the need for a separate file storage architecture.
• GridFS will leverage any existing replication or autosharding that you've set up for MongoDB, so getting failover and scale-out for file storage is easy.
• GridFS can alleviate some of the issues that certain filesystems can exhibit when being used to store user uploads. For example, GridFS does not have issues with storing large numbers of files in the same directory.
• You can get great disk locality with GridFS, because MongoDB allocates data files in 2GB chunks.
MongoDB Advanced Usage GridFS: example
$ echo "Hello, world" > foo.txt
$ ./mongofiles put foo.txt
connected to: 127.0.0.1
added file: { _id: ObjectId('4c0d2a6c3052c25545139b88'), filename: "foo.txt", length: 13, chunkSize: 262144, uploadDate: new Date(1275931244818), md5: "a7966bf58e23583c9a5a4059383ff850" }
done!

$ ./mongofiles list
connected to: 127.0.0.1
foo.txt 13

$ rm foo.txt
$ ./mongofiles get foo.txt
connected to: 127.0.0.1
done write to: foo.txt

$ cat foo.txt
Hello, world
MongoDB Advanced Usage GridFS: internal
The basic idea behind GridFS is that we can store large files by splitting them up into chunks and storing each chunk as a separate document.
autocomplete:PRIMARY> show collections
fs.chunks
fs.files
system.indexes

autocomplete:PRIMARY> db.fs.chunks.find()
{ "_id" : ObjectId("4db258ae05a23484714d58ad"), "files_id" : ObjectId("4db258ae39ae206d1114d6e4"), "n" :
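The splitting idea can be sketched in Python. This is a simplified model of how a driver might break a file into `fs.chunks`-style documents; the 262144-byte chunk size matches the example above, but real GridFS also stores per-file metadata in `fs.files`, which is omitted here, and the `files_id` value is a placeholder string rather than a real ObjectId.

```python
def split_into_chunks(data: bytes, files_id, chunk_size: int = 262144):
    """Split a file's bytes into GridFS-style chunk documents."""
    chunks = []
    for n, offset in enumerate(range(0, len(data), chunk_size)):
        chunks.append({
            "files_id": files_id,   # points back to the fs.files document
            "n": n,                 # chunk sequence number, for reassembly in order
            "data": data[offset:offset + chunk_size],
        })
    return chunks

chunks = split_into_chunks(b"x" * 600000, files_id="<some ObjectId>")
print([(c["n"], len(c["data"])) for c in chunks])  # [(0, 262144), (1, 262144), (2, 75712)]
```

Reading the file back is just fetching the chunks for one `files_id` sorted by `n` and concatenating their `data` fields.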
mongostat fields:
inserts  - # of inserts per second
query    - # of queries per second
update   - # of updates per second
delete   - # of deletes per second
getmore  - # of getmores (cursor batch) per second
command  - # of commands per second
flushes  - # of fsync flushes per second
mapped   - amount of data mmapped (total data size) in megabytes
vsize    - virtual size of process in megabytes
res      - resident size of process in megabytes
faults   - # of page faults per second (Linux only)
locked   - percent of time in global write lock
idx miss - percent of btree page misses (sampled)
qr|qw    - queue lengths for clients waiting (read|write)
ar|aw    - active clients (read|write)
netIn    - network traffic in (bits)
netOut   - network traffic out (bits)
conn     - number of open connections
DBA on MongoDB
Security and Authentication
1. Each database in a MongoDB instance can have any number of users.
2. Only authenticated users of a database are able to perform read or write operations on it.
3. A user in the admin database can be thought of as a superuser.
4. MongoDB must be started with the "--auth" option to enable authentication.
Backup on MongoDB
1. Data file cold backup: kill -INT mongod; copy the --dbpath directory
2. mongodump (export) and mongorestore (import)
3. fsync and lock
4. Backup from a slave

> use admin
switched to db admin
> db.runCommand({"fsync" : 1, "lock" : 1});
{ "info" : "now locked against writes, use db.$cmd.sys.unlock.findOne() to unlock", "ok" : 1 }

Why replicate?
1. Scale reads
2. Backup on a slave
3. Process data on a slave
4. DR (disaster recovery)
Master-Slave Replication
How does it work? The Oplog
oplog.$main is a capped collection in the local database. Each operation document contains:
• ts - Timestamp for the operation. The timestamp type is an internal type used to track when operations are performed. It is composed of a 4-byte timestamp and a 4-byte incrementing counter.
• op - Type of operation performed, as a 1-byte code (e.g., "i" for an insert).
• ns - Namespace (collection name) where the operation was performed.
• o - Document further specifying the operation to perform. For an insert, this would be the document to insert.

1. When a slave first starts up, it does a full sync of the data on the master node.
2. After the initial sync is complete, the slave begins querying the master's oplog and applying operations in order to stay up to date ("async").
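The slave's replay loop can be sketched in Python. This is a toy model: only the insert opcode is handled, `ts` is simplified to a plain integer, and the namespaces and documents are invented for illustration.

```python
def apply_oplog(collections, oplog, last_ts):
    """Apply oplog entries newer than last_ts, in order (toy model)."""
    for entry in oplog:
        if entry["ts"] <= last_ts:
            continue  # already applied during a previous pass
        if entry["op"] == "i":  # "i" = insert
            collections.setdefault(entry["ns"], []).append(entry["o"])
        last_ts = entry["ts"]
    return last_ts

slave = {}
oplog = [
    {"ts": 1, "op": "i", "ns": "test.users", "o": {"name": "joe"}},
    {"ts": 2, "op": "i", "ns": "test.users", "o": {"name": "sue"}},
]
last = apply_oplog(slave, oplog, last_ts=0)
print(slave, last)  # {'test.users': [{'name': 'joe'}, {'name': 'sue'}]} 2
```

Tracking the last applied `ts` is what lets the slave poll the oplog repeatedly and only apply new operations, which is why the oplog being a capped (bounded) collection matters: a slave that falls too far behind can lose its place.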
Replication on MongoDB

Replica Set
1. A replica set is basically a master-slave cluster with automatic failover.
2. One primary (master), some secondaries (slaves).
3. The primary is elected by the cluster, and the role may move to another node if the current primary goes down.
Replication on MongoDB

Setting up a Replica Set
1. The --replSet option names the replica set.

Member types:
• standard - a full copy of the data, votes, and is ready to become primary
• passive - a full copy of the data, votes, but never becomes primary
• arbiter - votes only; no data is replicated
MongoDB Auto-Sharding

Sharding: splitting data up and storing different portions of the data on different machines.
1. Manual sharding: the application code manages storing different data on different servers and querying against the appropriate server to get data back.
2. Auto-sharding: the cluster handles splitting up data and rebalancing automatically.
Auto sharding
When to shard?
1. You’ve run out of disk space on your current machine.
2. You want to write data faster than a single mongod can handle.
3. You want to keep a larger proportion of data in memory to improve performance.
The shard key:
1. defines how we distribute data.
2. MongoDB's sharding is order-preserving; data adjacent by shard key tends to be on the same server.
3. The config database stores all the metadata indicating the location of data by range.
4. It should be granular enough to ensure an even distribution of data.
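Order-preserving, range-based routing can be sketched in Python. This is a toy model of the config metadata: the shard names and key ranges below are made up for illustration, and a real mongos consults the config servers rather than a hard-coded table.

```python
# Each entry maps a half-open shard-key range [lo, hi) to a shard.
chunk_map = [
    (float("-inf"), 10, "shard0"),
    (10, 100, "shard1"),
    (100, float("inf"), "shard2"),
]

def route(shard_key_value):
    """Find which shard owns a given shard-key value."""
    for lo, hi, shard in chunk_map:
        if lo <= shard_key_value < hi:
            return shard
    raise KeyError("no chunk covers this key")

print(route(5), route(42), route(1000))  # shard0 shard1 shard2
```

Because the ranges are contiguous, a range query on the shard key touches only the few shards whose ranges overlap it; that is the "order-preserving" property in point 2.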
Chunks
1. A chunk is a contiguous range of data from a particular collection.
2. Once a chunk reaches about 200MB in size, it splits into two new chunks. When a particular shard has excess data, chunks migrate to other shards in the system.
3. The addition of a new shard will also trigger migration of chunks.
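Splitting and balancing can be modeled roughly as below. This is a deliberately crude sketch: a chunk is a `(lo, hi, size_mb)` tuple that splits at its key midpoint once it exceeds the threshold, and the balancer moves one chunk from the fullest shard to the emptiest. Real splits use the median document of the chunk, and real migration follows many more rules.

```python
SPLIT_THRESHOLD_MB = 200

def maybe_split(chunk):
    """Split a chunk at its key midpoint once it grows past the threshold."""
    lo, hi, size = chunk
    if size <= SPLIT_THRESHOLD_MB:
        return [chunk]
    mid = (lo + hi) // 2
    return [(lo, mid, size // 2), (mid, hi, size - size // 2)]

def rebalance(shards):
    """Move one chunk from the fullest shard to the emptiest (toy balancer)."""
    fullest = max(shards, key=lambda s: len(shards[s]))
    emptiest = min(shards, key=lambda s: len(shards[s]))
    if len(shards[fullest]) - len(shards[emptiest]) > 1:
        shards[emptiest].append(shards[fullest].pop())
    return shards

print(maybe_split((0, 100, 240)))  # [(0, 50, 120), (50, 100, 120)]
shards = {"shard0": [(0, 50, 120), (50, 100, 120), (100, 150, 80)], "shard1": []}
print(rebalance(shards))
```

Adding an empty "shard1" makes the imbalance obvious, which is why point 3 says a new shard triggers chunk migration.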
Sharding a Table

Enable sharding on a database:
db.runCommand({"enablesharding" : "foo"})

Querying on a sharded collection: assume a shard key of { x : 1 }.
Sharding machine layout: avoid a single point of failure
Review
Getting Up to Speed with MongoDB (document oriented and schema-free)
Developing with MongoDB (find())
Advanced Usage (tons of features)
Administration (easy to admin; replication, sharding)
MISC (BSON; internals)
Misc
1. BSON
2. Datafiles layout
3. Memory-Mapped Storage Engine
Misc 1: BSON (Binary JSON)
BSON is a lightweight binary format capable of representing any MongoDB document as a string of bytes. It is the format in which documents are saved to disk. When a driver is given a document to insert, use as a query, and so on, it encodes that document to BSON before sending it to the server.

Goals: efficiency, traversability, performance.
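A minimal sketch of the byte layout in Python, hand-rolling only the 32-bit integer element type. Real BSON has many more types, and drivers use a proper BSON library rather than code like this; the point is just to show the length-prefixed, traversable structure.

```python
import struct

def encode_int32_element(name: str, value: int) -> bytes:
    # Element = type byte (0x10 = int32), cstring key name, little-endian value.
    return b"\x10" + name.encode() + b"\x00" + struct.pack("<i", value)

def encode_document(pairs) -> bytes:
    # Document = int32 total length (including itself), elements, 0x00 terminator.
    body = b"".join(encode_int32_element(k, v) for k, v in pairs)
    total = 4 + len(body) + 1
    return struct.pack("<i", total) + body + b"\x00"

doc = encode_document([("x", 1)])
print(len(doc), doc.hex())  # {"x": 1} encodes to 12 bytes
```

The leading length and per-element type bytes are what make BSON traversable: a reader can skip whole subdocuments or jump between elements without parsing every value.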
Datafiles Layout & Memory-Mapped Storage Engine
1. The numbered data files for a database double in size for each new file, up to a maximum file size of 2GB.
2. MongoDB preallocates data files to ensure consistent performance.
3. Memory-mapped storage engine:
4. When the server starts up, it memory-maps all of its data files.
5. The OS manages flushing data to disk and paging data in and out.
6. MongoDB cannot control the order in which data is written to disk, which makes it impossible to use a write-ahead log to provide single-server durability.
7. 32-bit MongoDB servers are limited to a total of about 2GB of data per mongod, because all of the data must be addressable using only 32 bits.
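The doubling rule in point 1 can be worked through in Python. The 64MB starting size is an assumption here (the slide only states the doubling and the 2GB cap), so treat the first value as illustrative.

```python
def datafile_sizes_mb(count, first_mb=64, cap_mb=2048):
    """Sizes of successive data files: each doubles, capped at 2GB per file."""
    sizes, size = [], first_mb
    for _ in range(count):
        sizes.append(size)
        size = min(size * 2, cap_mb)  # never exceed the 2GB per-file cap
    return sizes

print(datafile_sizes_mb(7))  # [64, 128, 256, 512, 1024, 2048, 2048]
```

Doubling means a growing database allocates few, large files rather than many small ones, while the cap keeps each file mappable and manageable.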