Exploring Reddit Marketing Networks with Graph Databases

  • Emily McAuliffe
  • March 17, 2021

Can we discover astroturfing, marketing, and self-promotion networks using graph databases?

Originally posted by Daniel Ward on Medium.


If you’re short on time and only interested in the tech and results, skip to the “What is a graph database?” section below… But I recommend you read the lot for context.

What is the scenario?

I’m an avid Reddit user. More of a lurker than a contributor, but I’ve visited Reddit multiple times daily for around seven years now. As a result, I’m very familiar with astroturfing, which can briefly be described as (from Wikipedia):

the practice of masking the sponsors of a message or organization to make it appear as though it originates from and is supported by grassroots participants.

It is prevalent. Efforts both on and off Reddit to tackle this have increased in recent years, most clearly seen in laws requiring social media influencers to declare when something is a paid advertisement. In spite of this, the rules are often flouted, and it can be hard to determine whether something is a paid endorsement or a genuine one.

When it comes to Instagram, Facebook, Snapchat, and the majority of social networks, it is clear who the individual posting is (provided, of course, they are a real person at all). Being a visible and open person builds trust and makes people more likely to have faith in your recommendations, paid or not. This is why influencers are seen as such a concern, as their good-natured recommendation might actually be a thinly veiled advertisement.

The term “astroturfing” comes from the fact that AstroTurf makes a product designed to look like the real thing, and is also a play on “grassroots”, as in the quote higher up the page.

Reddit is slightly odd in this sense. Users are typically anonymous, with many (myself included) operating multiple accounts for different types of content. Likewise, I have never once come across a submission or comment on the site which has made clear that it is an advertisement or sponsored post (except for in certain IAMA posts). Astroturfing isn’t widely discussed on the platform, but it is a recognized concern.

Unlike with Instagram, tackling astroturfing on Reddit is hard. You don’t know who the users are, the majority of the content is in the comments (of which there are a lot), it’s very fast to set up a new account, and there’s nothing stopping you from commenting on your own content.

Reddit has one more strange cultural quirk. People are in equal parts trusting and untrusting. Take the r/TIFU subreddit (a subsite within Reddit, known as a subreddit, where people post real stories starting “Today, I fucked up”). Some relatively mundane stories are posted here and some truly fantastical ones (the latter being much better received). In either case, you can hop into the comments of any post and find a mix of people calling the submission “fake” and likely many more accepting the post for what it is.

There’s nothing illegal about posting a fake story to r/TIFU, provided you’re not advertising something, even if that is a bit disingenuous. But there’s plenty of content that is advertising more broadly on the site.

The Trigger

I came across a video on the r/funny subreddit which, to be honest, wasn’t particularly funny. And I found the music on it completely out of place. In fact, I thought the music was so out of place that I checked the comments to see if other people felt the same way. Many people did, though a small number were interested in the music. Indeed, the OP (“Original Poster”, the person who submitted the video) posted their own comment with a link to the Soundcloud page for the music.

Of all the astroturfing I’ve come across on Reddit, this was the most blatant. Opening OP’s profile revealed many repetitions of this same trick in the exact same format. Take a funny video, pop the music over it, post it and make a comment pointing to the Soundcloud.

Very clear to see their approach with just a quick overview of their profile’s comments.

Exploring deeper, I found some of the comments pointing to the Soundcloud existed on submissions by other users, who had also posted videos with the music track on it. And likewise, some of those users had commented the same way back to our original user. There were also a handful of comments from accounts made on the same day as the comment itself, with no other comments or posts, simply asking “What is the music?”, to which the OP would readily reply with a link to the Soundcloud.

This is very clear astroturfing, and more than likely, what was happening was:

  • The poster makes a submission and comments on it, linking to Soundcloud.
  • The poster logs into other accounts, promoting the post* and leaving comments.
  • Some comments ask about the music, to which the submitting user account readily replies with a link.

*It’s also worth noting that using multiple accounts to promote your own submissions is strictly against Reddit’s T&Cs, and they are known to crack down hard on users breaching these rules.

So, where does this leave us? We can be almost certain a single user is using multiple accounts to promote a musician (likely themselves), commenting dishonestly to appear trustworthy and using multiple accounts toward the same aim. There is potentially some vote manipulation going on too, though this is much harder to determine from our end.

Let’s investigate.

What is a graph database?

Before we dive into the solution, let’s quickly run through what a graph database is. A graph database stores data as discrete points (entities, or nodes), with connections (relationships, or edges) between those points. Whilst they have many uses, one of the best and oldest is discovering hidden connections. This approach has been used in banking for decades to detect fraud and other unscrupulous activity. It is even used by HMRC as part of their Connect system to find tax evasion. I can vouch for how impressive it is, because I used to work in one of the Connect teams, but I digress…

This is the most basic type of graph database, with only a single type of node and a single, bi-directional set of relationships.

The idea of a graph database is relatively simple on the face of it. Entities will often have relationships with other entities. For instance, if Bill knows Jane, and Jane knows David, but David doesn’t know Bill, we can determine three entities and two relationships between this small group.

With this, we can then explore the graph. If we were to ask “Who is a friend of a friend of David?”, we could quickly and easily find Bill. Graph databases allow us to ask this question directly, whereas a traditional relational database would likely require at least one join to answer it. The aim of this post, however, isn’t to go into deep detail about the differences between an RDBMS and a graph database, so I’ll leave that to your own research.
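
To make the friend-of-a-friend idea concrete, here’s a minimal sketch in plain Python (names and structure are purely illustrative, not how a graph database stores things internally):

```python
# A tiny "graph" for the Bill/Jane/David example: each person maps to the
# set of people they know.
knows = {
    "Bill": {"Jane"},
    "Jane": {"Bill", "David"},
    "David": {"Jane"},
}

def friends_of_friends(person):
    """People exactly two hops away, excluding the person and their direct friends."""
    direct = knows.get(person, set())
    two_hops = set()
    for friend in direct:
        two_hops |= knows.get(friend, set())
    return two_hops - direct - {person}

print(friends_of_friends("David"))  # {'Bill'}
```

A graph database answers the same question natively, without us having to write the traversal (or the joins) ourselves.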

For our scenario, you can hopefully see how identifying the connections between users might help us discover networks of accounts astroturfing.

Scraping Profiles with PRAW

Here’s the plan. To begin with, we identify a Reddit account we think is astroturfing. Then, we find all of their posts and comments. For each of their posts, we find who made a comment on it, and for each of their comments, who made the post the comment appeared on. We then repeat that logic for those newly discovered people, and again for the even more newly discovered people, and so on.

In theory, we will end up with a network of closely linked accounts, with some accounts several hops out being more closely connected than we might expect, based on where they’ve made comments.

To get this data, we’re going to use the Reddit API. PRAW (Python Reddit API Wrapper) is a very handy Python library that provides us easy access to the API. The logic we’re going to use is best shown in the diagram below:

Starting from the bottom left, we set our initial target and the number of “hops” to go through. Then, we extract all their posts and comments. Until we reach the maximum hop number, we then find all the users associated with this content and do the same for them. We export all the collected data to a JSON file. 

The overall code to do this, which effectively relies only on the PRAW library, comes in at around 200 lines.
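
I won’t reproduce all ~200 lines here, but a stripped-down sketch of the hop logic looks roughly like the following. The credentials, the starting username, and the function names are all placeholders, and the real code records far more detail (karma, timestamps, comment bodies, and so on):

```python
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",        # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="astroturf-explorer by u/your_account",
)

MAX_HOPS = 2
LIMIT = 100  # most recent 100 posts and 100 comments per user

processed_users = set()  # avoid re-scraping users...
processed_posts = set()  # ...and posts we've already seen

def related_users(username):
    """Everyone who commented on this user's posts, plus the authors of
    every post this user commented on."""
    found = set()
    redditor = reddit.redditor(username)
    # Their submissions -> everyone who commented on them
    for submission in redditor.submissions.new(limit=LIMIT):
        if submission.id in processed_posts:
            continue
        processed_posts.add(submission.id)
        submission.comments.replace_more(limit=0)
        for comment in submission.comments.list():
            if comment.author:
                found.add(comment.author.name)
    # Their comments -> the author of each post they commented on
    for comment in redditor.comments.new(limit=LIMIT):
        author = comment.submission.author
        if author:
            found.add(author.name)
    return found

frontier = {"suspect_username"}  # the account that triggered this project
for hop in range(MAX_HOPS):
    next_frontier = set()
    for user in frontier:
        if user in processed_users:
            continue
        processed_users.add(user)
        try:
            next_frontier |= related_users(user)
        except Exception:
            # e.g. suspended accounts, whose profiles are inaccessible (see below)
            continue
    frontier = next_frontier - processed_users
```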

There are a few things to be aware of with this code, not least the approach we’ve taken. Because we find every related user, and then every user related to them, each hop increases the data volume exponentially.

From an analysis of the scraped profiles, it looks like the average number of posts per user is around 17, and the average number of comments around 30.

It’s clear to see that the more hops we go through, the exponentially greater the volume of data we end up with. Further, and much more frustratingly given the volume of data we need, Reddit limits you to 60 API calls per minute. PRAW goes further and enforces a minimum 2-second wait between API calls, so in reality we can process 30 API calls per minute.

Because of how the code works, we must submit a separate API call for every single post and comment by a user in order to find the connected users. We can immediately limit this by not scraping the same item twice (multiple people might comment on the same post), but we also limit each user to their most recent 100 posts and 100 comments. This is still a vast quantity of data, and means that processing a single user could take over six and a half minutes in a worst-case scenario. *See addendum below
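
The worst-case arithmetic for a single user is easy to sanity-check:

```python
# Worst case for one user: 100 posts + 100 comments, each needing its own
# API call, with PRAW enforcing a 2-second gap between calls.
calls_per_user = 100 + 100
seconds = calls_per_user * 2
print(seconds / 60)  # ~6.67 -> a little over six and a half minutes
```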

To reduce needless processing, as mentioned before, we avoid scraping the same content or user twice. In reality, users are the big ones here, as they will require many API calls. This is as simple as recording who has already been processed and then doing a check before any API calls relating to them. We do the same for individual posts, in case multiple people comment on the same post.

One quirk that appeared on the first multi-hop run of this script was that some users were generating an error in PRAW. On closer inspection, it turns out these users had been suspended. Suspended users may retain all their comments and posts on Reddit, but their profile becomes completely inaccessible.

The suspended user’s content can easily be found through a quick Google.

We also want to make sure we record the data in a useful state. In an attempt to make this as simple as possible, the code initializes an object for each user, which contains their relevant data and methods to manipulate the data attributed to them.
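
As a rough illustration, that per-user object might look something like the following (the fields and method names here are hypothetical; the real code tracks more detail):

```python
from dataclasses import dataclass, field

@dataclass
class RedditUser:
    """Holds everything we scrape for a single account."""
    name: str
    karma: int = 0
    posts: dict = field(default_factory=dict)         # post_id -> post details
    commented_on: dict = field(default_factory=dict)  # post_id -> list of comment bodies

    def add_comment(self, post_id, body):
        self.commented_on.setdefault(post_id, []).append(body)

    def connected_post_ids(self):
        """Every post this user is linked to, whether they submitted it or commented on it."""
        return set(self.posts) | set(self.commented_on)
```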

For context at this point: from starting with one user with just 4 posts and 12 comments, after two hops we’re up to over 37,000 users, with a data file clocking in at just under 500MB.

Building a Graph Database

The model we’re going to use. The only parts absolutely critical to making this work are the user’s name and the post’s ID. Everything else is just flavor and detail, such as karma, which is Reddit’s way of scoring how well-received someone’s contributions to the site are. See the addendums for comments on this.

In building a graph database, there’s an element of working backward. We need to know what we want to achieve before we go and get the data. In the interest of giving you a flow of development, I haven’t really mentioned the data model, but the image above gives you an indication. We want to go from users to any posts they’ve made or commented on (or both). The easiest way to do this is to create a single CSV for each of the above parts: A user table, a posts table, and a connections table (which also includes the name of the user and PostID so we can link them together).

Each individual connection between a user and a post will have its own line. The only really quirky part here is the Comments JSON field, which will contain a JSON representation of any comments made. This is important, as we only want one row per User -> Post connection, but a user can make many comments on a single post. This isn’t critical for our graph database to work; it just means we can extract some extra detail on their types of comments more quickly and easily.
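
A sketch of writing that connections table, assuming the user objects outlined above (the column names are illustrative):

```python
import csv
import json

def write_connections(users, path="connections.csv"):
    """One row per User -> Post connection; all of that user's comments on
    the post are packed into a single JSON field."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "puid", "comments_json"])
        for user in users:
            for post_id in user.connected_post_ids():
                comments = user.commented_on.get(post_id, [])
                writer.writerow([user.name, post_id, json.dumps(comments)])
```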

Our graph database software of choice here is going to be TigerGraph. It offers a handy developer edition, though this is admittedly limited to 500MB of data and struggles with large graph visualizations. Plus, I have a particular fondness for GSQL as a graph query language, especially over Gremlin. So, in step 1, we define our graph schema.

Users, whether they are commenting or submitting the post in the first place, always act on a particular piece of content.

Voila! It really is as simple as that for us. At the absolute lowest level, this is what our graph is trying to do. Who posted on what? We’re actually not treating comments or posts separately because the connection would look the same, just with a different word saying “commented_on” instead.
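
For reference, the same schema can also be scripted rather than clicked together in GraphStudio. A minimal sketch using the pyTigerGraph client, assuming the vertex and edge names from the model above and placeholder connection details:

```python
import pyTigerGraph as tg

# Placeholder host and credentials for a local developer-edition instance
conn = tg.TigerGraphConnection(host="http://localhost", username="tigergraph", password="tigergraph")

print(conn.gsql("""
CREATE VERTEX User (PRIMARY_ID username STRING, karma INT)
CREATE VERTEX Content (PRIMARY_ID puid STRING, title STRING)
CREATE UNDIRECTED EDGE posted_on (FROM User, TO Content)
CREATE GRAPH RedditGraph (User, Content, posted_on)
"""))
```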

Next, we need to attach our data. We upload our data file and start telling TigerGraph which fields of our data relate to which features in our graph. For instance, a username in the data relates to the User bubble. A post ID (puid) in the data relates to our Post bubble. The edge connecting the two bubbles is a special case and takes both of those fields. This special connection is where the magic begins in a graph database.

If you’re curious why the fields don’t match the image table above, see the addendums at the bottom of this post.

On the left of the screen, we tell TigerGraph where we want to make a connection between our data and our graph schema. On the right, we give it the specifics. We repeat this process for the Content vertex (on the Post ID), and again for the posted_on line (which takes two entries, one being the username and the other the Post ID).

On to step 3: this is really easy. We just say that everything is good and, yes, we want to build a graph database on what we’ve set up. Hit run, let it do its thing (it’s surprisingly fast, even running just on my 8GB MacBook), and check the stats.

You can see the rate of load in the bottom right graph, showing it took about 30 seconds overall.

We’ll linger here for just a moment to look at those stats. 2,033,637 vertices, of which 17,642 are users and the rest posts. This was from only two hops worth of data! We also have over twice as many edges as vertices, which is a good sign for finding internal connections.

So, let’s get to the good bit.

Exploring the Graph

We move on to stage 4, visually exploring the graph, which is the final stage we’ll look at here (there is a stage 5, writing GSQL queries, but that’s beyond what we need to do for now).

To begin with, I’m going to ask TigerGraph to return the user we started with, the cause of this entire project.

One lonely bubble. Normally, TigerGraph shows us relevant information, which would include the username here, but I’ve told it not to in order to save me a big job blurring everything.

And now, we double-click on this bubble. And we get…

Pop!

All their posts, and posts they’ve commented on! So, for a final check, let’s just quickly double-click a post and see what happens.

When we double-click on a bubble, TigerGraph adds all connected data and highlights the new additions. So above, the highlighted parts to the right are all new, and we can see the post I double-clicked on is connected to our original user!

Amazing. It looks like it all works. So, we’re going to go one bigger. We can tell TigerGraph to, starting from our origin user, expand out to any relevant posts, then from there to any relevant users, then again to their posts, and so on. We actually only need to do this a small number of times before we duplicate our hops. There is one catch, though: this version of TigerGraph can only show a certain amount of data, so we’re going to tell it to sample and bring back most, but not all, of the data. We could miss useful things this way, so in a production environment this wouldn’t be appropriate. Alas, let’s go ahead and see what we get.

Bit messy though.

And this is what we get. We’ve asked TigerGraph to bring back the first 50 bits of relevant content for our origin user (of which there are only 15 anyway, as we know from when we double-clicked earlier), and then bring back 50 users attached to each of those posts. Luckily for us, the actual volumes here are small. But it’s not particularly nice to look at. TigerGraph has a handful of options for changing how the data is shown. If we choose the circle mode, we get something nicer.

Pretty! It would be useful if there were a way to filter out vertices with only a single edge attached, but we’d need the GSQL query mode for that, which is beyond the scope of this article.

Amazingly, the circle mode almost perfectly suits what we’re trying to do. We’re now seeing something really useful. Each of those lines indicates where a user has commented on something. What we’re looking for is where users (blue bubbles) have lots of lines. In reality, we’re simply looking for where there’s more than one line.

Zooming in on the centre and bringing in posts (red bubbles) with multiple connections to our central trio makes it easier to see the network.

We have three users here that stand out. They have a number of connections between them, which on viewing more closely we would discover are always a mix of where one of them has posted and another account has commented. We also have a single post which all three accounts commented on (the bottom-right bubble).

But why is this suspicious? Well, look at the image posted before we zoomed in. Of all those blue bubbles, only three have more than a single connection. Reddit is a huge website; it’s deeply uncommon for people to comment on the same things, even on highly popular posts, and we can see from the small number of users (when we asked for 50 per post) that this is not particularly popular content.

So what now?

Findings and Conclusions

This alone isn’t enough to determine astroturfing. Far from it. But I can assure you that looking into these accounts confirms it (and, as I showed right back at the start with the Soundcloud comments image, we already know it’s happening). It does suggest, though, that graph databases could be used to find astroturfing activity with relative ease.

There are a few counter-arguments. What if friends comment on each other’s content? What about really, wildly popular posts? What if there are ‘Reddit celebrity’ users that always attract the same audience to their content? This approach isn’t perfect, but we can use more traditional methods to filter these cases out. Rules like “Only show me users with connections to more than a single post” and “Whitelist these users to ignore them and their content, as we know they’re okay” would be a massive help, as in the sketch below.
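
Both of those rules are simple enough to prototype even outside the graph, straight off the connections file sketched earlier (the whitelist contents are just an example):

```python
import csv
from collections import defaultdict

WHITELIST = {"AutoModerator"}  # example: accounts we already trust

posts_per_user = defaultdict(set)
with open("connections.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["username"] not in WHITELIST:
            posts_per_user[row["username"]].add(row["puid"])

# Rule: only show users connected to more than a single post
suspicious = {user: posts for user, posts in posts_per_user.items() if len(posts) > 1}
for user, posts in sorted(suspicious.items(), key=lambda kv: -len(kv[1])):
    print(user, len(posts))
```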

The biggest issue I’ve had with this, by far, is getting the data. There’s just so much of it, and access through the API is so slow, that I can’t possibly get enough. However, if you already had all this information and could dive straight into the graph database side of things… (cough cough, Reddit admins, cough).

This approach does work, and it’s surprisingly simple to do. In fact, for me, the hardest part was getting the data in the first place. It isn’t a be-all and end-all; you still need to take the output from your graph and investigate it.

During the course of this project, I took a lot of twists and turns, and I encourage you to read the addendums if you have questions. If you have questions I haven’t covered, do put them in the comments section.

Thanks for reading. It’s been a surprisingly long road putting this together. Let me know in the comments if you’ve got ideas on how you could use graph databases, or where I missed building in functionality for this Reddit tool!

Addendum for Unanswered Questions

What do you mean by Post and Posted? How can a commenter Post on a Post? What is OP?

I will be brutally honest that this is my own laziness and familiarity with Reddit-speak. On Reddit, you have submissions and comments. Submissions are links to content or text posts which start a conversation. You reply to submissions with comments, and you can reply to comments with other comments. Submissions are often called “Posts”, and the person who made that Post is called the “OP”, or “Original Poster”. Because our graph model only tracks posts as nodes, irrespective of whether the edge was made by the OP or a commenter, all content has been made on a Post (Submission). I might say an OP posts a post and a comment is posted on a post, but what I mean is a user submits a submission, and a comment is commented on a submission. It’s best not to overthink this; it’s a simple concept I’m over-explaining.

How come the graph section doesn’t match your outlined data?

You might have noticed that, despite writing out the format we’d be using at the start of the “Building a graph database” section, that format isn’t actually used. After about two months of playing around with this idea and trying to refine it from my initial build, I realised I’d already achieved what I was trying to do, which was to prove graph databases could be used to find astroturfing on Reddit. All the time I spent on it afterwards was refining it, making it more functional, increasing the data collection capabilities, and so on. It was when I came back to this in the new year that I suddenly realised that, unless I wanted to become an expert in using PRAW, I wasn’t gaining anything from this. So the graph section uses my first code iteration’s data, purely to showcase what can be done.

You talk about Comments JSON, but then don’t use it?

See the above explanation.

Why do you put everything in a JSON file when the Graph Database uses a CSV?

For some reason, I assumed TigerGraph could happily use JSON. This is not true for the developer edition, which demands CSV. Rather than go back and rewrite how I was recording the data, I decided to simply convert my JSON to a CSV.

6 and a half minutes for one user? Why not access their profile directly?

It’s true that we could scrape a single user’s profile very quickly and easily; we simply send one API request for their username and pull back all their comments and submissions. However, from each one of those, we then want all the people who commented on the content or were responsible for the original post. Because of this, we then have to run another API request for each individual comment and submission to get back the detail we need; the original request doesn’t contain it. If, somehow, we knew all the users we would need to scrape, we could simply run their profiles alone and save hundreds (or thousands) of API requests… Alas, the entire point of the scraping code is to find where people are connected, so we can’t make much progress in this space without the many API requests. Of course, if you happened to work for Reddit, this wouldn’t be an issue…

If everyone is anonymous, why have you blurred everything?

On Reddit, everyone gets to hide behind a username. But if you choose to put up certain details, it can be trivial to find out the real person behind an account. Some people want this (celebrities, for instance). In this scenario, I’m picking on one particular person and their Reddit usage, talking quite poorly about their music, and using them to highlight a bad practice done in a poor way. They also link directly to their own Soundcloud. If, perchance, you work for Reddit and are interested in the specifics, you’re welcome to message my API account directly, u/fk4kg3nf399. Please also pop a comment below to let me know you’ve done this, as I don’t really log into it.
