Now this gets interesting. Apparently MongoDB will get a BI connector - which seems to be a Multicorn foreign data wrapper for PostgreSQL!
While the step is somewhat logical given MongoDB's limited built-in analytical capabilities vs. what PostgreSQL can do declaratively, e.g. CTEs, window functions or TABLESAMPLE, it could also backfire badly. In fact, I'm almost convinced it will backfire.
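To make the comparison concrete, here is a sketch of what those declarative features look like in PostgreSQL; the `events` table and its columns are hypothetical, purely for illustration:

```sql
-- A CTE plus a window function: a 7-day moving average of daily event counts.
WITH daily AS (
    SELECT date_trunc('day', created_at) AS day,
           count(*)                      AS n
    FROM events
    GROUP BY 1
)
SELECT day,
       n,
       avg(n) OVER (ORDER BY day
                    ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS seven_day_avg
FROM daily
ORDER BY day;

-- TABLESAMPLE (PostgreSQL 9.5+): read roughly 1% of the table's pages.
SELECT * FROM events TABLESAMPLE SYSTEM (1);
```

In MongoDB you would have to reach for the aggregation pipeline or application-side code to approximate this.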
PostgreSQL already has 'NoSQL' capabilities such as native JSON and the HSTORE key/value type. There is also ToroDB, which emulates a wire-protocol-compatible MongoDB on top of PostgreSQL. Work on views (Slide 34) in ToroDB is already underway; this will enable users to query documents stored in ToroDB not only with the MongoDB query language but also with SQL, seamlessly integrating ToroDB document data with plain PostgreSQL relational data.
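Mixing document and relational data in one query could then look roughly like this; the tables `docs` and `customers` and their columns are made up for the sketch:

```sql
-- Join JSONB documents against a plain relational table.
-- "@>" tests JSONB containment, "->>" extracts a field as text.
SELECT c.name,
       d.body ->> 'status' AS status
FROM docs d
JOIN customers c
  ON c.id = (d.body ->> 'customer_id')::int
WHERE d.body @> '{"type": "order"}';
```

With a GIN index on `docs.body`, the containment filter can even use an index, something a bolted-on BI connector is unlikely to match.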
At that point there is little reason to use MongoDB at all, except perhaps data ingestion speed: ingestion into ToroDB is still much slower than into a 'real' MongoDB, but this is being worked on.
And in my experience from a current project, with a bit of anticipatory thinking, PostgreSQL's data ingestion speed can at least rival MongoDB's, with security, integrity, transactions and all - on a server with a quarter of the CPU cores the MongoDB server has.
So, the wolf and the lamb will feed together, and the lion will eat straw like the ox... - there are truly interesting times ahead. :-)