Lotus what? Nah mate, the world has moved on.

Tuesday, 5 January 2016

LDC Via: how does it work?



LDC Via offers a comprehensive programming model to make the most of your organisation’s data: a RESTful API provides secure access to everything you need, and you can build a new front-end to your data using whatever technology you wish—our own LDC Via Lens offering lets you build a simple interface with no code at all!

Whether you migrate using our web application or a local installation of the Java utility, your data is safe and sound. All content is stored in a resilient, backed-up MongoDB instance, with our application layer (and its in-built security) on top.
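
As a hedged sketch of what calling such a RESTful API might look like from client code: the base URL, the endpoint path, and the `apikey` header name below are illustrative assumptions for this example, not documented LDC Via specifics.

```python
import urllib.request

# Hypothetical base URL and endpoint layout, assumed for illustration only
BASE_URL = "https://example.ldcvia.com/1.0"

def build_documents_request(database, collection, api_key):
    """Build (but do not send) a GET request for the documents in a collection."""
    url = f"{BASE_URL}/collections/{database}/{collection}"
    req = urllib.request.Request(url)
    req.add_header("apikey", api_key)  # assumed name of the auth header
    req.add_header("Content-Type", "application/json")
    return req

req = build_documents_request("discussion-nsf", "MainTopic", "my-secret-key")
print(req.full_url)
```

In a real client you would pass the request to `urllib.request.urlopen` (or use a library such as `requests`) and parse the JSON response.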

There are many reasons why an organisation would want to take advantage of the possibilities offered by LDC Via. Here are some of the more common use-cases:

  • Speed up existing Domino applications by off-loading data to the LDC Via platform.
  • Move data from Domino into a document store more suited to an organisation’s defined IT architecture plans with minimal upheaval.
  • Manage the “push” of data from inside the firewall to the outside world (e.g. for suppliers and clients).
  • Use LDC Via as an alternative document store for data that would otherwise languish in an application or environment which has been retired.
  • A simple application archive solution: LDC Via’s document store coupled with its web-based data viewer and query engine make for an effective archive (especially combined with our standard application templates).



Sunday, 6 December 2015

Bits or pieces?: Open source as weapon


I often talk about the use of an open play as a weapon, a mechanism for changing the environment you exist in. Back in 2011, when I examined the phenotypic changes of companies (due to a particular cycle known as peace, war and wonder - more details here), there was a pronounced change in attitude in the next generation of companies: from the use of open means simply for cost reduction to the provision of open means (source, APIs, data) to manipulate the market (see table 1).

Table 1 - Changes of phenotype from traditional to next generation.

In the following year, I examined the level of strategic play (based upon situational awareness) versus the use of open means to change a market and noticed a startling effect. Those who showed high levels of situational awareness and used open approaches to change a market showed positive market cap changes over a seven year period (see figure 1).

Figure 1 - Changes of market cap with Strategic Play vs Open approaches.

I've talked extensively about the use of Wardley mapping and how you can use it to identify where you might attack a market. I've also provided examples such as "Fool's mate", one of more than 70 forms of repeatable gameplay that can be applied to a given context. In Fool's mate, you attack the underlying system with an open approach in order to force a change in a higher-order system. More details are provided here, but figure 2 provides an outline.

Figure 2 - Fool's mate in business.

However, even if you have reasonably high levels of situational awareness and you understand where to attack and how to manipulate a market through open approaches, this is not the end of the story. The use of open approaches is not a fire-and-forget system. You can't just declare a thing "open" or create an "open consortium" and walk away.

Unfortunately, far too often in corporations I've seen people believe that "open" is synonymous with "free resources" and that somehow, by opening up a system, the community will be so grateful for the gift that they'll just adopt it and success is assured. This is la la fantasy land. The majority of today's great tech corporations owe their success to the open source community; they are built on it, and they do themselves no favours by disrespecting the community in such a way.

If you decide to attack a market and you decide to use open as the means of changing it (what I like to call "open by thinking"), then when you launch you're going to have to put time, effort, money, resources and skill into making it happen. Hence the importance of "open by thinking": going open should be seen as an investment decision which can pay handsomely if done right. There are numerous pitfalls to avoid, e.g. antagonising or not building a community, not listening to a community, not acting as a benevolent dictator for that community or failing to put its interests over your own, failing to steer a clear direction, failing to invest, and worst of all establishing a collective prisoner's dilemma.

You are the custodian, the gardener, the benevolent dictator of the community you hope to create. The act of throwing some code over the wall, creating an open source consortium and running a social media campaign on how "good" you are is a long way off from what you need to be doing. It is more akin to the self-aggrandising but absentee landlord who claims the lack of tenants for their unsafe flats is purely because people renting properties are ungrateful.

Yes, you can use open approaches as a weapon to change the market in the battle between companies but it's a weapon that requires skill, dedication, investment and care.

BTW, I do love Chapter 12 - How to tell if a FLOSS [e.g. open source] project is doomed to FAIL. Hat tip to Jan Wildeboer for that one.


Friday, 13 November 2015

NVIDIA® Jetson™ TX1 Supercomputer-on-Module Drives Next Wave of Autonomous Machines | Parallel Forall

Today NVIDIA introduced Jetson TX1, a small form-factor Linux system-on-module, destined for demanding embedded applications in visual computing.  Designed for developers and makers everywhere, the miniature Jetson TX1 (figure 1) deploys teraflop-level supercomputing performance onboard platforms in the field.  Backed by the Jetson TX1 Developer Kit, a premier developer community, and a software ecosystem including Jetpack, Linux For Tegra R23.1, CUDA Toolkit 7, cuDNN, and VisionWorks, Jetson enables machines everywhere with the proverbial brains required to achieve advanced levels of autonomy in today’s world.
Aimed at developers interested in computer vision and on-the-fly sensing, Jetson TX1’s credit-card footprint and low power consumption mean that it’s geared for deployment onboard embedded systems with constrained size, weight, and power (SWaP).  Jetson TX1 exceeds the performance of Intel’s high-end Core i7-6700K Skylake in deep learning classification with Caffe while drawing only a fraction of the power, achieving more than ten times the perf per watt.
Jetson provides superior efficiency while maintaining a developer-friendly environment for agile prototyping and product development, removing extra legwork typically associated with deploying power-limited embedded systems. Jetson TX1’s small form-factor module enables developers everywhere to deploy Tegra into embedded applications ranging from autonomous navigation to deep learning-driven inference and analytics.

Jetson TX1 Module

Built around NVIDIA’s 20nm Tegra X1 SoC featuring the 1024-GFLOP Maxwell GPU, 64-bit quad-core ARM Cortex-A57, and hardware H.265 encoder/decoder, Jetson TX1 measures in at 50x87mm and is packed with performance and functionality. Onboard components include 4GB LPDDR4, 16GB eMMC flash, 802.11ac WiFi, Bluetooth 4.0, Gigabit Ethernet, and accepts 5.5V-19.6VDC input (figure 2).  Peripheral interfaces consist of up to six MIPI CSI-2 cameras (on a dual ISP), 2x USB 3.0, 3x USB 2.0, PCIe gen2 x4 + x1, independent HDMI 2.0/DP 1.2 and DSI/eDP 1.4, 3x SPI, 4x I2C, 3x UART, SATA, GPIO, and others.  Needless to say, Jetson TX1 stands tall in the face of many an algorithmic and integration challenge.
Figure 2. Jetson TX1 block diagram. Blocks on the outside indicate typical routing on the carrier.
The Jetson module utilizes a 400-pin board-to-board connector (figure 3) for interfacing with the Developer Kit’s reference carrier board, or with a bespoke, customized board designed during your productization process.  Tegra’s chip-level capabilities and I/O are closely mapped to the module’s pin-out.  The pin-out will be backward-compatible with future versions of the Jetson module.  Jetson TX1 comes with an integrated thermal transfer plate (figure 3), rated between -25°C and 80°C, for interfacing with passive or active cooling solutions.  Consult NVIDIA’s Embedded Developer Zone for thorough documentation and detailed electromechanical specifications, in addition to visiting the active and open development community on Devtalk.
Figure 3. Left to right: Top of Jetson TX1 module, bottom (with connector), and complete assembly with TTP.
Jetson TX1 draws as little as 1 watt while idle, around 8-10 watts under typical CUDA load, and up to 15 watts TDP when the module is fully utilized, for example during gameplay and the most demanding vision routines.  Jetson TX1 provides exceptional dynamic power scaling either based on workload via its automated governor, or by explicit user commands to gate cores and specify clock frequencies. The four ARM A57 cores automatically scale between 102 MHz and 1.9 GHz, the memory controller between 40MHz and 1.6GHz, and the Maxwell GPU between 76 MHz and 998 MHz.  Touting 256 CUDA cores with Compute Capability 5.3 and Dynamic Parallelism, Jetson TX1’s Maxwell GPU is rated for up to 1024 GFLOPS of FP16.  When combined with support for up to 1200 megapixels/sec from either three MIPI CSI x4 cameras or six CSI x2 cameras, along with hardware H.265 encoder & decoder, integrated WiFi and HDMI 2.0, Jetson TX1 is primed for all-4K video processing. The Jetson TX1 module retails for $299 and has 5-year availability. In addition to releasing the ecosystem tools, NVIDIA has made available the Jetson TX1 Developer Kit to help users get started today.
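
The quoted FP16 rating can be sanity-checked with back-of-the-envelope arithmetic, assuming (as is conventional for Maxwell) two FLOPs per fused multiply-add and two FP16 operations per CUDA core lane:

```python
cuda_cores = 256
boost_clock_ghz = 0.998   # top of the quoted GPU frequency range
flops_per_fma = 2         # a fused multiply-add counts as two FLOPs
fp16_per_lane = 2         # Maxwell packs two FP16 values per CUDA core

gflops_fp16 = cuda_cores * boost_clock_ghz * flops_per_fma * fp16_per_lane
print(round(gflops_fp16))  # ~1022, consistent with the quoted 1024 GFLOPS rating
```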

Jetson TX1 Developer Kit

NVIDIA’s Jetson TX1 Developer Kit includes everything you need to get started developing on Jetson. Including the pre-mounted module, the Jetson TX1 Developer Kit (figure 4) contains a reference mini-ITX carrier board, 5MP MIPI CSI-2 camera module, two 2.4/5GHz antennas, an active heatsink & fan, an acrylic base plate, and a 19VDC power supply brick.
Figure 4. Jetson TX1 Developer Kit, including module, reference carrier and camera board.
The PCIe lanes on the Jetson TX1 Developer Kit are routed from the module to a PCIe x4 desktop slot on the carrier for easy prototyping, in addition to an M.2-E mezzanine with PCIe x1 for wireless radios.  Available on the Embedded Developer Zone, NVIDIA shares the schematics and design files for the reference carrier along with the 5MP CSI-2 camera module, including routing and signal integrity guidelines.  Board software support bundled by Jetpack provides easy flashing and device configuration.  Out of the box, the Jetson TX1 Developer Kit provides the experience of a desktop PC, but in a small embedded form factor that only draws a fraction of the power.  The Jetson TX1 Developer Kit is available for pre-order immediately for $599, with shipments beginning November 16 in the US and December 20 in Europe and APAC.
Select researchers had the chance to review the Jetson TX1 Developer Kit in the lead-up to launch.   MIT professor Dr. Sertac Karaman and his autonomous robotics lab worked hands-on with the new kit, upgrading their self-driving RACECAR from their previous Jetson TK1 setup.  Figure 5 shows their autonomous vehicle in action.
In addition to their autonomous RACECAR powered by Jetson TX1, Dr. Karaman’s lab at MIT is behind other projects that utilize Jetson for autonomy, as well.  In collaboration with MIT Media Lab’s Changing Places group on the Persuasive Electric Vehicle (PEV), their self-driving tricycle provides autonomous transport of pedestrians and packages in urban environments—and is also powered by Jetson.   Leveraging the ecosystem, the students at MIT quickly prototyped their projects and benefited from the flexible development environment and performance afforded by Jetson TX1.

Jetpack and Linux For Tegra R23.1

The software ecosystem for Jetson is extensive, and Jetpack simplifies software configuration and deployment.  Jetpack automates the installation process on Jetson to include all the tools and drivers for development.   Jetpack 2.0 is provided for Jetson TX1.  This version of Jetpack bundles Linux For Tegra (L4T) R23.1, Tegra System Profiler 2.4 and Graphics Debugger 2.1, PerfKit 4.5.0, and OpenCV4Tegra.  L4T R23.1 ships with U-Boot and Linux 3.10.64 aarch64 kernel, alongside the Ubuntu 14.04 armhf filesystem.  Recent improvements in L4T include gstreamer 1.6 extensions with hardware support for H.265, an improved nvgstcapture sample for testing the camera module, and integrated support for WiFi & Bluetooth.
L4T R23.1 includes support for full desktop OpenGL 4.5, allowing full-on Linux gaming/VR experience in addition to simulation.  OpenGL ES 3.1 is also provided.  This release includes OpenCV4Tegra, enabling users to transparently utilize NEON SIMD extensions from the standard OpenCV interface.  A video tutorial series on OpenCV is available through the Embedded Developer Zone.

CUDA 7 and cuDNN/Caffe

Jetpack 2.0 includes the CUDA Toolkit version 7.0, with CUDA 7.5 coming in a future release. CUDA 7.0 unleashes Jetson TX1’s integrated Maxwell GPU.  Maxwell, with Compute Capability 5.3, supports Dynamic Parallelism and higher-performance FP16.  The many uses for Dynamic Parallelism in embedded applications include point cloud processing & tree partitioning, parallel path planning & cost estimation, particle filtering, RANSAC, solvers, and many others.
One of the highlights of the Jetson software ecosystem is an incredible deep learning toolkit built on CUDA, providing Jetson with onboard inference and the ability to apply reasoning in the field. Included is NVIDIA’s cuDNN library, adopted by multiple deep learning frameworks including Caffe.
We ran a power benchmark using the Caffe AlexNet image classifier, comparing Jetson TX1 to an Intel Core i7-6700K Skylake CPU. The table shows the results. Read more about these results in the post “Inference: The Next Step in GPU-Accelerated Deep Learning”.
  Platform          img/s   Power (AP+DRAM)   Perf/watt   Efficiency vs. i7-6700K
  Intel i7-6700K    242     62.5 W            3.88        1x
  Jetson TX1        258     5.7 W             45          11.5x
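
The perf-per-watt column follows directly from the first two columns; a quick check reproduces it (small differences from the published figures come down to rounding):

```python
results = {
    # platform: (images/sec, measured AP+DRAM power in watts), from the table above
    "Intel i7-6700K": (242, 62.5),
    "Jetson TX1": (258, 5.7),
}

perf_per_watt = {name: ips / watts for name, (ips, watts) in results.items()}
baseline = perf_per_watt["Intel i7-6700K"]
for name, ppw in perf_per_watt.items():
    print(f"{name}: {ppw:.2f} img/s/W ({ppw / baseline:.1f}x vs i7-6700K)")
```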
Kespry Designs, a Silicon Valley industrial drone developer, is using deep learning on Jetson TX1 to provide inference on construction sites for asset tracking of equipment and materials. This takes the tiresome, human-intensive work out of looking after assets and on-site logistical planning.  Due to the low SWaP and computational capability of Jetson TX1, Kespry plans to migrate processing onboard Unmanned Aerial Vehicles instead of offline in the datacenter, shortening response times for tasks like inspection and triage.  See a short video about them in Figure 6.
Kespry developed their proof-of-concept on the Jetson TX1 Development Kit in just a few weeks.  The prototype uses a Caffe model trained to recognize and count different classes of construction equipment.  Using Jetson TX1, Kespry is now deploying this previously offline process in real-time onboard their drone.  Jetson is able to transfer resource-intensive tasks once performed in a datacenter onboard mobile platforms, thereby closing the loop on response and improving quick-reaction capabilities, creating new opportunities for companies like Kespry.


VisionWorks

Jetson TX1 marks the first release of VisionWorks available to developers through Jetpack 2.0 and the Embedded Developer Zone. Built on Khronos Group’s OpenVX standard for power-efficient vision processing, VisionWorks provides primitives and building blocks that are highly optimized for Tegra using tuned CUDA kernels. Figure 7 shows the results of benchmarks that we ran on Jetson TX1, profiling the differences between VisionWorks and OpenCV.
Figure 5. Benchmarks demonstrate the large speedup of VisionWorks vs. OpenCV running on the Jetson TX1 CPU and GPU.
VisionWorks is more than 10x faster than upstream CPU-only OpenCV, is 4.5x faster than OpenCV4Tegra with NEON extensions, and is 1.6x faster than OpenCV’s GPU module.   The Overall Computer Vision Score was collected from the geometric mean performance of all the overlapping primitives between OpenCV and VisionWorks.  Each primitive was measured across image sizes 720p and larger, and across all permutations of argument parameters.
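
The Overall Computer Vision Score described above is a geometric mean of per-primitive results; it can be sketched as below. The speedup values here are hypothetical, for illustration only.

```python
import math

def geometric_mean(values):
    """nth root of the product, computed via logs for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-primitive speedups of VisionWorks over an OpenCV baseline
speedups = [12.0, 8.5, 15.2, 9.8]
print(round(geometric_mean(speedups), 2))
```

Unlike an arithmetic mean, the geometric mean is not dominated by one outlier primitive, which is why it is the usual choice for aggregating benchmark ratios.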
In addition to more than 50 filtering, warping, and image-enhancement primitives, VisionWorks also offers numerous higher-level building blocks as well, such as LK optical flow, stereo block-matching (SBM), Hough lines & circles, and Harris (Corner) feature-detection & tracking.   VisionWorks provides a full implementation of OpenVX 1.1.  Developers can leverage VisionWorks to deploy camera-ready algorithms and vision pipelines, already tuned for Jetson.
Get VisionWorks today on NVIDIA’s Embedded Developer Zone.

Jetson TX1: A Rich Development Platform

The NVIDIA Jetson ecosystem is rich with tools and support for enabling your research and development of applications and products with Jetson TX1. In the larger scheme, NVIDIA software toolkits for accelerated computing, deep learning, computer vision, and graphics are portable from the datacenter to the workstation to embedded SoC (figure 8), allowing enterprise users to seamlessly scale and deploy their applications to devices in the field.   Using Jetson, developers can leverage NVIDIA’s shared architecture and power-efficient technology to roll out high-performance embedded systems with ease and flexibility.
Figure 6. Jetson taps into the NVIDIA ecosystem to deliver unprecedented scalability and developer-friendly support.
Adept at hosting core processing capabilities alongside learning-driven inference and reasoning, Jetson TX1 represents the ultimate in performance and efficiency for powering your device with the next wave of autonomy. With shipments of Jetson TX1 Developer Kit beginning November 16, secure your pre-order today.  And let us know about the amazing things you create using Jetson!


Wednesday, 4 November 2015

Call to Open Source OS/2 Software - OS2World Community


These days, with the announcement of Blue Lion, I think it is important to keep supporting open source software and to recognize the benefits of this model.

For several years I have been running an ongoing campaign to open source as much OS/2 software as possible. I want to request your help: please contact any former OS/2 developers and ask them to make their software open source so it can benefit the OS/2 user community.

Any former OS/2 developer who wants to open source their software is welcome to contact me (Martin) to talk about which license would best fit their needs; I will assist them in any way I can.


We all know the ill-fated history of IBM's OS/2 Warp, though some may not know about the first OS/2 OEM distribution, eComStation. Now a new company called Arca Noae, not happy with the results of that distribution, has signed an agreement with IBM to create a new OS/2 version. They announced a new OS, codenamed "Blue Lion", at Warpstock 2015 this last October; it will be based on OS/2 Warp 4.52 and the SMP kernel. The OS/2 community has received this news positively, and the OS2World community is now asking everybody who has developed for OS/2 in the past to open source their code and collaborate.


Tuesday, 13 October 2015

Dell. EMC. HP. Cisco. These Tech Giants Are the Walking Dead | WIRED

HP. CISCO. DELL. EMC. IBM. Oracle. Think of them as the walking dead.
Oh, sure, they’ll shuffle along for some time. They’ll sell some stuff. They’ll make some money. They’ll command some headlines. They may even do some new things. But as tech giants, they’re dead.
This was driven home in wonderfully complete fashion this past Wednesday, thanks to a trio of events. If you don’t follow the seemingly uninteresting, enormously lucrative, and, in fact, endlessly fascinating world of enterprise computing—computing that helps run big businesses—you may have missed them all. But they were big news in the enterprise world. And together, they show just how dead those giants really are.
First, Pure Storage, a Silicon Valley startup that sells a new kind of hardware for storing large amounts of digital data, made its Wall Street debut. Later in the day, The Wall Street Journal reported that big-name computer tech company Dell was in talks to buy EMC, a storage outfit that’s much older and much larger than Pure Storage (the deal was announced this morning). And during an event in Las Vegas, Amazon introduced a sweeping collection of new cloud computing services that let you juggle vast amounts of data without setting up your own hardware.
That may seem like a lot to wrap your head around, but the story is really quite simple. For decades, if you were building a business and you needed to store lots o’ data, EMC was your main option. You gave the company lots o’ money, and it gave you some hefty machines packed with hard disks and some software for storing data on those hard disks. The trick was that you could only get that software from EMC. So, anytime you wanted to store more data, you gave EMC more money. This made the company very rich.
But then little companies like Pure Storage came along and sold storage gear built around flash, a much faster alternative to hard drives, letting you juggle more data more quickly and, potentially, for less money. But more importantly, cloud computing companies like Amazon came along, letting you store data on their machines. These machines sat on the other side of the Internet, but you could access them from anywhere, at any time. That meant you didn’t have to buy hardware from EMC or anyone else.
That’s the subtext as EMC, once a giant of the tech world, merges with Dell, a company that isn’t exactly on the rise. Dell, in fact, suffers from the same conundrum as EMC—a conundrum that grew so onerous, Dell went private. This conundrum also plagues HP. And IBM. And Cisco. And Oracle. As Bloomberg Business feature writer, Elon Musk biographer, and unparalleled Silicon Valley hack Ashlee Vance puts it: “Why don’t IBM, HP, EMC, Dell and Cisco all merge and get this thing over with?”
What is this conundrum? Well, we’ll let Vance explain that too. When someone asked what we should call that IBM-HP-EMC-Dell-Cisco merger, his response was wonderfully descriptive. He suggested we call the company Fucked By The Cloud.

[Redacted] by the Cloud

The Cloud. The term has taken on so many meanings in recent years. But keep in mind: most of these meanings come from IBM, HP, EMC, Dell, Cisco, and other companies that don’t want to be fucked by it. The best way to think about The Cloud is this: It’s the way that the giants of the Internet—aka Amazon, Google, and Facebook—build their businesses.
These companies built Internet businesses so large—businesses that ran atop hundreds, thousands, even tens of thousands of computers—they eventually realized they couldn’t build them with hardware and software from established vendors. They couldn’t use traditional storage gear from EMC. They couldn’t use servers from Dell and HP and IBM. They couldn’t use networking gear from Cisco. They couldn’t use databases from Oracle. It was too expensive. And it couldn’t scale. That’s another buzzword. It means “helping an online operation achieve world domination.”
So, Amazon and Google and Facebook built a new breed of hardware and software that would scale quite nicely. They built their own servers, their own storage gear, their own networking gear, their own databases and other software for juggling information across all this hardware. They streamlined their hardware to make it less expensive, and in some cases, they sped it up, moving from hard disks to flash drives. They built databases that juggled data using the memory subsystems of dozens, hundreds, or even thousands of machines—subsystems that can operate even faster than flash.

The Sharing Game

But they didn’t keep this stuff to themselves. They shared it. Now, all the stuff that Amazon and Google and Facebook built is trickling down to the rest of the world. That’s important, because, as time goes on and the Internet expands, so many other businesses will scale like Amazon and Google and Facebook. Many already are.
Amazon is now offering up its own infrastructure to this world of businesses. Literally. That’s what a cloud computing service is. Google is doing the same. And Facebook, more than anyone, has released both its software and its hardware designs to the world at large, so that others can build their own operations in much the same way. This is called open source.
With help from these open source designs and the general example of the Internet giants, an army of up-and-coming enterprise vendors are offering hardware and software that operates a lot like the stuff Amazon and Google and Facebook have built. This includes not only storage vendors like Pure Storage, but server makers like Quanta and networking outfits like Cumulus Networks and Big Switch. Myriad software makers, such as MemSQL and MongoDB, sell databases based on designs from Facebook and Google and Amazon.
All this is why IBM, HP, EMC, Dell, and Cisco are fucked. Yes, they can offer their own cloud computing services. They can offer software and hardware that works like the stuff Facebook has open sourced. And to a certain extent, they have. But the competition now stretches far and wide. And if they go too far with new cloud services and products, they’ll cannibalize their existing businesses. This is called the innovator’s dilemma.

The Larry Ellison Effect

Yes, this conundrum plagues Oracle too. The Oracle empire is funded by expensive databases that don’t scale. The difference is that Oracle has built a sales team that can force businesses into buying anything—even if it makes no economic sense. This is called The Iron Fist of Larry Ellison.
Oh, and it plagues another venerable tech company: Microsoft. The difference here is that Microsoft has more quickly and adeptly moved into the world of cloud computing. Like Amazon and Google and Facebook, it runs its own massive Internet services, including Bing. That means it too has been forced to build its own data center hardware and software. And it has done an unusually good job of challenging Amazon with its own cloud computing services. This is called Microsoft Azure.
Of course, Microsoft suffers from other problems too. One of its biggest money makers is the Windows operating system, for instance, and a relatively small number of people use Windows on smartphones, tablets, and other devices of the future. This is called Fucked By Mobile.

Who’s Not [Redacted]?

Who’s not fucked? Well, Pure Storage is looking better than EMC. That said, its IPO wasn’t exactly a home run. And it still sells stuff that you have to install in your own data center. Gear like this will always have a place in the world. But the future of enterprise computing, it has become increasingly clear, lies with cloud computing services. And that means it lies with Amazon.
Amazon is by far the world’s largest cloud computing operation. Its cloud services are where so many businesses and coders go to run software and store data. And last week, the company continued its efforts to take this model still further—to offer up not just raw processing power and raw storage but also its own databases and data analytics tools and other software services. If you use Amazon, you don’t need servers and other hardware from Dell and HP and EMC and Cisco—and you don’t need databases from Oracle and IBM.
Luckily, Amazon has some competition in the cloud computing world. That would be Google and Microsoft. The others are also-rans. HP and Oracle and IBM and the rest will imitate Amazon. But they’re too far behind—and carry too much baggage—to catch up. Google and Microsoft can put some heat on Amazon. In fact, Microsoft is further along than Google. So, in short, we’re really pulling for Fucked By Mobile.
Update: This story has been updated with the news that Dell and EMC have indeed merged.


Friday, 25 September 2015

Nathan Freeman presents Graphs in Action #MWLUG2015 IV102

This is a great concept with a rubbish name. Graphs? Really?

It seems to be a natural fit for Lotus Domino databases though. This may be of great interest to some of the die-hard Lotus aficionados out there.



  1. 1. The Graph revolution How to change the way you think about NSFs and achieve Nirvana Nathan T Freeman - #ChiefArchitect @RedPillDevelopment
  2. 2. Mission productive problems mind
  3. 3. The Numbers Problem Thousands of data silos (NSFs) Hundreds of indexes in each Thousands of documents in each
  4. 4. The Logic Problem Data schemas in the UI Limited serialization Relationships are a lot of work
  5. What is a graph? Elements (vertexes and edges); key/value pairs; index-free adjacency.
  6. Why use graphs? Speed; scalability; intuitive modelling.
  7. People graph: Nathan knows Mac.
  8. Movie graph: vertices The Matrix, Keanu Reeves and Neo; edges portrays, appearsIn and stars.
  9. What is an NSF? Documents; item-value pairs; appallingly bad indices.
  10. Graph & NSF.
  11. OpenNTF Domino API: documents with keys (Serializable -> MD5 -> UNID); auto-type coercion; Document implements Map<String, Object>, including Document.get("fname" + " " + "lname").
  12. A single NSF with hundreds of thousands of vertices and millions of edges.
  13. A question: if each Vertex is a Document, why can't every Document be a Vertex?
  14. The dream: tens of millions of enterprise documents; decades of accumulated knowledge; one big warehouse; no migration required.
  15. Implementation 2.0: vertices need models, and models are hard; the graph must consume many NSFs; a UniversalID is not enough, so a MetaversalID is needed; some vertices can't be modified.
  16. Tinkerpop Frames:

```java
@TypeField("form")
@TypeValue("Person")
public interface User extends VertexFrame {

    @TypedProperty("FirstName")
    public String getFirstName();

    @TypedProperty("FirstName")
    public void setFirstName(String firstName);

    @IncidenceUnique(label = "likes")
    public Iterable<Edge> getLikes();

    @IncidenceUnique(label = "likes")
    public Edge addLikes(Vertex vertex);
}
```

  17. Graph sharding: one graph can have many element stores (NSFs); element stores are based on Frame interfaces; stores respect ACLs and can cross servers; a store can hold vertexes and/or edges; proxy shards separate graph data from core properties.
  18. MetaversalIDs: ReplicaID + UniversalID; 16 hex chars + 32 hex chars is bulky as text, but 16 hex chars fit in a 64-bit number (a long), so a long[3] can hold the same information; a NoteCoordinate (x, y, z) stores as byte[24].
  19. The numbers problem: thousands of data silos (NSFs); hundreds of indexes in each; thousands of documents in each; millions of vertexes across the enterprise; no indexes needed.
  20. The logic problem: schemas are defined with Java interfaces; anything can be written to any key/value pair; relationships are trivial.
  21. Mission productive problems mind.
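Slide 18's arithmetic can be sketched in Java. This is a minimal illustration, not the OpenNTF Domino API itself (the class and method names below are my own): a ReplicaID is 16 hex characters, i.e. one 64-bit long; a UniversalID is 32 hex characters, i.e. two more longs; the resulting long[3] serialises to exactly 24 bytes, matching the byte[24] figure on the slide.

```java
import java.nio.ByteBuffer;

/** Sketch of the MetaversalID idea: ReplicaID + UniversalID packed as long[3]. */
public class NoteCoordinateSketch {

    /** Parse 16 hex characters into one 64-bit long. */
    static long hexToLong(String hex16) {
        return Long.parseUnsignedLong(hex16, 16);
    }

    /** ReplicaID (16 hex chars) + UniversalID (32 hex chars) -> long[3]. */
    static long[] toCoordinate(String replicaId, String universalId) {
        return new long[] {
            hexToLong(replicaId),
            hexToLong(universalId.substring(0, 16)),
            hexToLong(universalId.substring(16))
        };
    }

    /** Serialise the coordinate: three longs occupy exactly 24 bytes. */
    static byte[] toBytes(long[] coord) {
        ByteBuffer buf = ByteBuffer.allocate(24);
        for (long l : coord) {
            buf.putLong(l);
        }
        return buf.array();
    }

    public static void main(String[] args) {
        // Illustrative IDs only, not real Domino identifiers.
        long[] c = toCoordinate("85257A8C006B2E44",
                                "0C6AD2F3A8E4D1B785257A8C006B2E44");
        System.out.println(toBytes(c).length); // 24
    }
}
```

Storing 24 raw bytes instead of 48 characters of hex halves the footprint of every cross-NSF document reference, which matters at the "millions of vertexes" scale the deck describes.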

Wednesday, 23 September 2015

A new breed of database hopes to blend the best of NoSQL and RDBMS - TechRepublic


Relational databases like Oracle still dominate revenue. MongoDB dominates adoption. And Cassandra dominates scale.
So, what does that leave the graph database?
Though graph databases don't get as much press as their document and columnar-style peers, they fill a useful role within the enterprise. By giving developers a way to express relationships between data rather than fixating on the data itself, graph databases offer a powerful new way to tame the growing complexity of big data.
To better understand graph databases and how they fit into the broader database market, I sat down with Luca Olivari, my former MongoDB colleague and CEO of OrientDB, a leading multi-model database that spans graph, document, and relational databases.
TechRepublic: What are graph databases and how do they differ from other NoSQL databases like MongoDB (document), Cassandra (wide column), etc.?
Olivari: Graph databases are optimized for managing highly related data and complex queries. Focus is on the relationships rather than the data itself. A graph database stores persistent direct links (joins, if you will) that can be queried efficiently, and response times are independent of the total size of the dataset.
Document databases are a great fit for managing complex and ever-changing data, but they lack the functionality to relate documents and model relationships. Likewise, wide-column stores are scalable, but they don't provide features to connect data. Relational databases, on the other hand, compute relationships at run time, so response time increases with the size of the dataset.
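The "persistent direct links" and index-free adjacency Olivari describes can be illustrated with a minimal Java sketch (the class names are mine, not any product's API): each vertex holds direct references to its neighbours, so following an edge is a pointer dereference whose cost depends only on the local edge count, never on the total size of the dataset.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustration of index-free adjacency: edges are stored as direct references. */
public class AdjacencySketch {

    static class Vertex {
        final String name;
        // The "persistent direct link": no index lookup, no join computed at query time.
        final List<Vertex> likes = new ArrayList<>();

        Vertex(String name) {
            this.name = name;
        }
    }

    public static void main(String[] args) {
        Vertex nathan = new Vertex("Nathan");
        Vertex mac = new Vertex("Mac");
        nathan.likes.add(mac); // relationship persisted once, at write time

        // Traversal just follows the reference; no scan over all vertices is needed,
        // which is why response time is independent of dataset size.
        System.out.println(nathan.likes.get(0).name); // prints "Mac"
    }
}
```

A relational database would instead recompute this relationship at query time with a join over an index, which is exactly the cost that grows with the dataset.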
TechRepublic: Speaking of document databases, you spent some time at MongoDB. What led you to leave MongoDB for OrientDB?
Olivari: I have fond memories of my time at MongoDB and learned a lot building the international business with the experienced management team there.
That said, my choices are always based on three basic variables: Products, People, and Market Opportunity. I've always been intrigued by OrientDB as a product, as it solves some of the problems that people are facing when using MongoDB and first-generation NoSQL products.
OrientDB allows you to connect JSON documents using direct links taken from the graph theory. Furthermore, it makes it easy to adapt legacy applications that are using SQL. It's schema-full, schema-less or hybrid, and it supports ACID transactions. That creates a huge market opportunity, and we have a talented team to go after it.
TechRepublic: What's a typical use case for OrientDB? That is, when would you be a fool not to use a graph database for a particular business need?
Olivari: Relationships are as important as the data itself in today's connected world. Use cases that require modeling complex relationships are the best for graph databases. As such, Real Time Recommendation, Fraud Detection, Master Data Management, Social Networks, Network Management, Geolocalized Apps and Routing, Blockchain, Internet of Things, Identity Management, and many others come to mind.
TechRepublic: You used the word "multi-model." Is it this aspect of OrientDB, or something else, that sets OrientDB apart among graph databases like Neo4j?
Olivari: OrientDB is a distributed graph database where every vertex and edge is a JSON document. In other words, OrientDB is a native multi-model database and marries the connectedness of graphs, the agility of documents, and the familiar SQL dialect.
The combination of graphs and documents simplifies the architecture, removing the need to keep multiple databases synchronized, and the SQL dialect reduces the learning curve when moving from legacy relational products to next generation databases.
TechRepublic: So, you're not a vanilla graph database at all. Will a graph database ever have the same adoption as a document database like MongoDB or a columnar database like Cassandra? Or are graph databases suited for a smaller universe of potential applications?
Olivari: The addressable market for pure graph databases is relatively smaller, but we're by no means talking about a small one. Forrester Research estimates that graph databases will reach over 25% of all enterprises by 2017, and that represents a multi-billion dollar market opportunity.
PwC's Technology Forecast describes graph databases as "the least mature NoSQL type, but the most promising," and I tend to agree. Adopting a graph database can give data-driven organisations a competitive advantage. The value that graphs can bring to companies is not yet widely known, but graphs are the fastest-growing category, according to DB-Engines.
OrientDB is the leading multi-model database, and our addressable market includes document, graph, and relational.
TechRepublic: Back to graph databases. Should we be thinking about how graph databases complement other databases? In fact, is this how we should be thinking about modern data: that there is no one right database for every need, and so developers should always look to apply the right database tool to a particular business need?
Olivari: We often hear the term "polyglot persistence," which means using multiple databases to solve different problems. That's a side effect of NoSQL products that are addressing only a subset of the data management issues.
Using a polyglot approach is not always the best choice, as it forces developers to support more than one database and synchronize them to satisfy their requirements.
There will always be databases that solve a niche problem extremely well, but we need a solution that can become the operational datastore of the modern enterprise. Multi-model databases in general, and OrientDB in particular, have what it takes to become a viable alternative to RDBMSes and to succeed first-generation NoSQL products.

A new breed of database hopes to blend the best of NoSQL and RDBMS - TechRepublic:

'via Blog this'
