Monday, November 14, 2011

Visualization of Social Network behind #OccupyWallStreet Twitter Hashtag

The following visualization was created using Microsoft NodeXL and the 'Group in a Box' method to show clusters within the OccupyWallStreet Hashtag social network.


Nodes are sized according to their in-degree, or the number of times someone has mentioned that user in a tweet. The image for each node is that user's actual profile image.
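For anyone curious how in-degree falls out of the raw data, here is a minimal sketch in Python. The edges.txt filename and its two-column "tweeter mentioned-user" format are my assumptions, matching the edge list printed in the parsing post below:

from collections import defaultdict

# Count in-degree: how many times each user is mentioned.
# Assumes edges.txt holds one "tweeter mentioned_user" pair per line.
in_degree = defaultdict(int)
for line in open('edges.txt'):
    parts = line.split()
    if len(parts) != 2:
        continue
    source, target = parts
    in_degree[target] += 1   # each mention bumps the target's in-degree

# Show the ten most-mentioned users.
for user, count in sorted(in_degree.items(), key=lambda x: -x[1])[:10]:
    print user, count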

This image contrasts with a similar image created by Marc Smith. I'm unsure whether the difference is due to the fact that his data is from 10/8/2011 or that I'm still a bit new to the Group in a Box feature of NodeXL. I don't see many other ways to cluster the nodes, so I'm inclined, at this moment, to say the difference in network structure reflects a shift in the network's composition driven by the many events surrounding the 'Occupy' movement over the past month.

The data for this visualization was captured over a 22-hour period, from the evening of 11/12/2011 to the afternoon of 11/13/2011. Previous blog posts show how to use Python and MongoDB to store and parse this data.

For comparison purposes, the below image is the same network without the Group in a Box clustering method applied:

What to do with my Twitter data once it is in my MongoDB?

My previous blog post showed how we can use Python, pycurl, pymongo, MongoDB and the Twitter Streaming API to import all tweets containing a certain hashtag into our database. Once we have all of that data, how can we parse it so we can effectively use it? My last example collected the entire tweet.

Tweets, though limited to only 140 characters, are actually large when you observe the entire JSON object. (Recall the API returns the Tweet as a JSON object.) An example tweet shows the large JSON structure. There is a lot of information in a Tweet, so capturing the entire thing is worthwhile, especially since it costs only a few kilobytes of storage per tweet. However, we'll need to parse each tweet to analyze the structure of our dataset. I won't get into the specifics of how to use JSON or the entire Twitter JSON object, but you will need a general understanding of JSON to fully follow the example shown below.
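To make that concrete, here is a heavily trimmed sketch of a tweet object showing only the fields this post uses; the values are made up, and real tweets carry dozens more fields:

# A made-up, heavily trimmed tweet; real tweets have many more fields.
tweet = {
    "text": "Heading downtown with @somebody #occupywallstreet",
    "user": {
        "screen_name": "example_user",
        "profile_image_url_https": "https://si0.twimg.com/profile_images/123/me.jpg"
    },
    "entities": {
        "user_mentions": [
            {"screen_name": "somebody"}
        ],
        "hashtags": [
            {"text": "occupywallstreet"}
        ]
    }
}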

So, let's say we want to map the social network at play in a Twitter database. We would want to extract the screen_name of the tweeter and of whatever other users they mention. We can query the database for certain fields of each tweet: the entities.user_mentions.screen_name array and the user.screen_name string. We'll loop through all of our tweets and print out a list of edges that together form a social graph. In this example, if a user does not mention anyone, I still capture the tweet and show the link as a self-loop in order to preserve the out-degree (for network analysis reasons) of each tweeter.

So, the sample code would be:
from pymongo import Connection

connection = Connection()
db = connection.occupywallstreet
print db.posts.count()

# First loop: print one "tweeter mentioned_user" edge per line.
# A tweet with no mentions becomes a self-loop so the tweeter still
# appears in the network with the correct out-degree.
for post in db.posts.find({}, {'entities.user_mentions.screen_name':1, 'user.screen_name':1}).sort('user.screen_name', 1):
    if len(post['entities']['user_mentions']) == 0:
        print post['user']['screen_name'], post['user']['screen_name']
    else:
        for sname in post['entities']['user_mentions']:
            print post['user']['screen_name'], sname['screen_name']

# Second loop: print each distinct user's profile image URL. Because
# the results are sorted by screen_name, remembering the previous name
# is enough to skip duplicates.
last_seen = ""
for post in db.posts.find({}, {'user.profile_image_url_https':1, 'user.screen_name':1}).sort('user.screen_name', 1):
    if last_seen == post['user']['screen_name']:
        continue
    print post['user']['screen_name'], post['user']['profile_image_url_https']
    last_seen = post['user']['screen_name']

It's pretty straightforward. We connect to the database and perform a query that returns only the screen_names of the users a tweeter mentions, along with the tweeter's own screen_name. This is accomplished with the following line:


for post in db.posts.find({}, {'entities.user_mentions.screen_name':1, 'user.screen_name':1}).sort('user.screen_name', 1):

The .sort('user.screen_name', 1) call sorts the output so that all of each user's activity appears together, in order.

The last loop gives me each Twitter user's profile image. My end goal is to visualize this network in NodeXL, and I will want to use each user's profile_image as the shape of the node. Thus I iterate over all users and capture the profile_image_url_https value for each user with the following block of code:

last_seen = ""
for post in db.posts.find({}, {'user.profile_image_url_https':1, 'user.screen_name':1}).sort('user.screen_name', 1):
    if last_seen == post['user']['screen_name']:
        continue
    print post['user']['screen_name'], post['user']['profile_image_url_https']
    last_seen = post['user']['screen_name']

When all is said and done, I have every edge of my network along with the profile_image URLs for each user in the database who tweets.
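If you'd rather not copy and paste from the terminal, a small variation writes the edge list to a CSV file that can be pulled into Excel/NodeXL. The filename is my own choice, and db is the same database object from the script above:

import csv

# Write the edge list to a file instead of printing it.
edges = csv.writer(open('edges.csv', 'wb'))
for post in db.posts.find({}, {'entities.user_mentions.screen_name':1, 'user.screen_name':1}):
    mentions = post['entities']['user_mentions']
    if len(mentions) == 0:
        edges.writerow([post['user']['screen_name'], post['user']['screen_name']])
    else:
        for sname in mentions:
            edges.writerow([post['user']['screen_name'], sname['screen_name']])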

Up next I'll share some visualizations I created with data I gathered using these methods.

Saturday, November 12, 2011

How to use Twitter's Filtered Streaming API, Python and MongoDB

When I started doing this I couldn't find anywhere on the Internet a complete solution to the following problem:

"Track a hashtag on Twitter and place the tweets into a MongoDB via Python".

For example, I needed to grab all Tweets that had the #occupywallstreet hashtag in them and place them in a Mongo Database using Python.

Why MongoDB?  It's easy, efficient and perfect for storing and querying a large number of documents.  When the documents are Tweets encoded as JSON, it's even easier.

Why Python?  I had never used Python before but found nice and simple Twitter and MongoDB libraries that make this EASY.

So, to get to the meat of the problem, here is the code:

import pycurl, json

STREAM_URL = "https://stream.twitter.com/1/statuses/filter.json"
WORDS = "track=#occupywallstreet"
USER = "myuser"
PASS = "mypass"

# Callback pycurl invokes with each chunk of data from the stream.
# Anything that fails to parse as JSON (blank keep-alive lines and
# other noise) is simply skipped.
def on_tweet(data):
    try:
        tweet = json.loads(data)
        db.posts.insert(tweet)
        print tweet
    except:
        return

from pymongo import Connection
connection = Connection()           # connect to the local MongoDB server
db = connection.occupywallstreet    # select the occupywallstreet database

conn = pycurl.Curl()
conn.setopt(pycurl.POST, 1)
conn.setopt(pycurl.POSTFIELDS, WORDS)                # the track=#hashtag filter
conn.setopt(pycurl.HTTPHEADER, ["Connection: keep-alive", "Keep-Alive: 3000"])
conn.setopt(pycurl.USERPWD, "%s:%s" % (USER, PASS))  # basic auth credentials
conn.setopt(pycurl.URL, STREAM_URL)
conn.setopt(pycurl.WRITEFUNCTION, on_tweet)          # hand each chunk to on_tweet
conn.perform()                                       # blocks and streams forever

We're relying on Twitter's Streaming API to return our Tweets.  The options we pass to pycurl produce the same effect as running the following command at the command prompt:

"curl -d track=#occupywallstreet http://stream.twitter.com/1/statuses/filter.json -umyuser:mypass"

The line:
db = connection.occupywallstreet
is where we select the Mongo database (the Connection() call is what actually connects to the server). This requires that MongoDB is up and running; the occupywallstreet database itself is created automatically the first time something is inserted into it. The command:
db.posts.insert(tweet)
places the JSON object into the database. You can then query and search for tweets using MongoDB queries. Please see Querying - MongoDB for more information on how to query the database and MongoDB for general MongoDB information.
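For example, here are a few illustrative queries one might run once tweets have accumulated. The screen name is hypothetical; the field names come from the tweet objects themselves:

from pymongo import Connection

db = Connection().occupywallstreet

# How many tweets have we collected so far?
print db.posts.count()

# Every tweet from one particular (hypothetical) user.
for post in db.posts.find({'user.screen_name': 'example_user'}):
    print post['text']

# How many tweets mention at least one other user?
print db.posts.find({'entities.user_mentions.0': {'$exists': True}}).count()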

You have to install the pycurl and pymongo packages for Python. There are various ways to do this. I used 'easy_install' to simply download and install them with essentially no effort.

A key point to making this code run without fault is found in the function on_tweet. Looking at the callback function, we have to make our code resilient to the noise that can come back from Twitter. If you've ever run 'curl' from the command line you will occasionally see the API return blank lines. We need to account for these blank lines and other non-JSON values the API might return.
def on_tweet(data):
    try:
        tweet = json.loads(data)
        db.posts.insert(tweet)
        print tweet
    except:
        return

I print out all tweets just so I can verify the program continues to run. I don't actually read them, but if I fail to see tweets streaming across my terminal I know something went wrong.
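One caveat I'll flag as my own observation rather than anything from the Twitter docs: pycurl hands WRITEFUNCTION raw chunks of bytes, and those chunks don't necessarily line up with tweet boundaries, so the try/except above can silently drop a tweet that arrives split across two chunks. A sketch of a callback that buffers until each newline (the stream delimits tweets with newlines) might look like this:

# Sketch of a newline-buffering callback; db is the same database
# object used above, and json is already imported at the top.
incoming = [""]   # one-element list so the closure can mutate it

def on_data(data):
    incoming[0] += data
    while "\n" in incoming[0]:
        line, incoming[0] = incoming[0].split("\n", 1)
        line = line.strip()
        if not line:
            continue   # blank keep-alive line
        try:
            db.posts.insert(json.loads(line))
        except ValueError:
            pass       # not valid JSON; skip it

You would then register it with conn.setopt(pycurl.WRITEFUNCTION, on_data) in place of on_tweet.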

And thus, in about 30 lines of Python, we have a nice program that stores all tweets containing the #occupywallstreet hashtag into a Mongo Database.

Monday, September 19, 2011

Why nothing will replace Facebook (at least not for a while)

Though I'm quite motivated in many aspects of my life:
  • Olympic trials qualifier in the marathon (an extreme amount of work)
  • MS/PHD program while working full time
I'm still VERY lazy.  I do not think I am alone.  Especially when it comes to technology and using web-based applications.  I'm hardly motivated to find all of the features of gmail, Facebook and other such entities.  I usually just wait until I stumble across someone who wants to show me how well they have mastered these tools and pick up tips.

For example, gmail has a multi-sign-in feature that allows you to sign in as multiple users at once.  I have two gmail email addresses.  I do not use this feature.  I have Safari and Mozilla on my laptop and keep one account signed in within each browser, effectively signing in to both accounts via two windows.  Why do I do this?  Because I do not want to figure out how to use their feature.  I started to read about it, got tired after 5 seconds and decided my way was sufficient.

But I digress.  Slightly.

I do not think I am alone in my aversion to learning all of the features of a product I already use.  I'm tech-savvy and even I hate doing it when I'm not motivated.  I currently have 3 email addresses.  Part of this is due to work, where I cannot check my personal email at work and checking my work email at home is a pain.  Thus I am forced to live 2 separate email lives.  It should stop there.  But I have a gmail account that I am slowly converting to for all email traffic.  And (this is embarrassing.  So embarrassing that my professor tried to kick me out of class for admitting it today) I still have an AOL account.

WHY?

Because in order to move off of AOL completely I need to email EVERYONE I know and ensure they never send email to my AOL account again.  I'd also have to move saved mail from AOL to gmail, move contacts, and update the automatic notifications I actually use online (I could set up a forward, but then I'd also get all of the spam my AOL account receives).  And I'd still have to occasionally check AOL for important emails from family.  We all have family members that just don't get the 'I changed email addresses' bit, right?

So it's a pain.  And laborious.  And that is what it would take to migrate off of Facebook.  All of my friends are on Facebook.  All of my pictures are on Facebook.  Facebook is now also an email system of sorts, and I (gasp!) save emails there in an effort to keep some important things rather than write them down.  If I migrated to Google+, I, and all of my friends, would have to re-friend everyone, copy all the pictures we might want, establish the same lists via circles and re-create our virtual lives.

But I'm lazy.  I'm not going to do it.  Maybe the rest of the Comp. Sci. students in the world could decide to do this, but they're not.  They're not as lazy as I am, and even they're not doing it.  Even if G+ is better.  Even if the UI is faster, more intuitive, integrates into the rest of the Google platform and blows Facebook out of the water.  There is something to be said for being 'first' in certain social computing arenas.  Facebook did overtake MySpace, but I believe MySpace did a poor job covering the different aspects of what social media needed to do.

MySpace didn't provide the API Facebook did, and let's be honest, it was a little shady.  The default 'browsing' feature was to search for women between 18 and 35 who were single, viruses spread like the plague on MySpace, and it was WAY too easy to assume the identity of another person (a group of my friends all decided to be Chuck one day on MySpace.  They all had the same profile pic, info, name, music and background.  Unless you knew the actual URL of the real Chuck, you never knew who you were talking to).

But Facebook quickly incorporates any and every improvement Google+ makes, allowing users to have all the features of both products.  So, until the lazy folks like myself die off from this planet, or a social network that runs inside our conscious brains is invented, I do not see the masses leaving Facebook anytime soon.

Sorry Google, I really liked Google+

:(



For an interesting visualization of the different lives I live via email please see this:  https://wiki.cs.umd.edu/cmsc734_11/index.php?title=Confessions_of_my_email

Monday, March 14, 2011

No Pi day today. Need to wait another 3 months, 14 days....

Professor Kruskal pointed out to my class a few years ago that someone had written a paper about how pi is wrong:  http://www.math.utah.edu/~palais/pi.pdf

The author points out that if pi were 6.28 (twice its current value), things might be simpler or more elegant...

For example, when you took trigonometry you learned that one full rotation of a circle is 2*pi radians.  The cyclical graphs we know as sin(x) and cos(x) would look the same; the x-axis would just be scaled by a factor of two.  After all, humans invented pi and chose its value.  Maybe it's tacky to suggest that one circle should allow pi (in radians) = pie (shape).  Or maybe we gave it a number before we really understood what pi was about.  Let's dig a little deeper and explore some other things Mr. Palais points out:

Currently:        cos(x+pi) = -cos(x)
If pi=6.28:       cos(x+pi) = cos(x)


Currently:      nth roots of unity: e^(2*pi*i*k/n) for k = 0, 1, ..., n-1
If pi=6.28:     nth roots of unity: e^(pi*i*k/n) for k = 0, 1, ..., n-1

And my all time favorite (the most beautiful equation ever written):

e^(pi * i) = -1

Now if we let pi = 6.28 we get:

e^(pi * i) = 1

Yes, we've improved the amazing.
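If you want to sanity-check these identities numerically, a few lines of Python with the standard cmath module will do it (the snippet is just my own illustration):

import cmath

pi = cmath.pi
tau = 2 * cmath.pi   # the 6.28... constant Palais argues for

print cmath.exp(pi * 1j)    # approximately -1
print cmath.exp(tau * 1j)   # approximately  1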

So, while the mild-mannered geeks all celebrate today as pi day, I'll be waiting until June 28th to sit down and read "The Joy of Pi"  http://www.joyofpi.com/

Thursday, March 3, 2011

Fun with Google's Latest Visualization Tool: Ngram Viewer

Late last year Google released another free tool called 'Ngram Viewer'  http://ngrams.googlelabs.com/.  It's similar to Google Trends, but we're no longer limited to searching the last decade or so.  Google may not have existed in 1800, but they're now able to show how many times a particular word or phrase appeared in print each year.  When your spare-time hobby is to scan in every book ever written, you can do interesting stuff.

Want to know how many times the word 'fuel' was written between 1800 and 2008?  Try it out, you'll see some interesting things.

Linked are some I found interesting:


The Future is No Longer Now, it has been replaced with the Past:
http://ngrams.googlelabs.com/graph?content=future%2Cpast&year_start=1800&year_end=2010&corpus=0&smoothing=3 


But we like to write about the years:
http://ngrams.googlelabs.com/graph?content=1800%2C+1850%2C+1900%2C+1950%2C+2000&year_start=1800&year_end=2008&corpus=0&smoothing=3


Political Parties:
http://ngrams.googlelabs.com/graph?content=republicans%2Cdemocrats%2Cwhigs&year_start=1800&year_end=2008&corpus=0&smoothing=3


Some of the presidents:
http://ngrams.googlelabs.com/graph?content=Roosevelt%2CNixon%2CReagan%2CBush%2CCarter%2CKennedy%2CLincoln%2C&year_start=1800&year_end=2008&corpus=0&smoothing=3


Academic Subjects (stolen from Ben Shneiderman):
http://ngrams.googlelabs.com/graph?content=chemistry%2Cphysics%2Cbiology&year_start=1800&year_end=2008&corpus=0&smoothing=3


Burger King IS King:
http://ngrams.googlelabs.com/graph?content=McDonalds%2CHardees%2CTaco+Bell%2C+Burger+King&year_start=1800&year_end=2000&corpus=0&smoothing=3

Monday, February 28, 2011

GirlTalk and Computer Science, what more could a guy ask for?

Yes, it's no secret I love Girl Talk.  Who doesn't love this musical magician?  I swear he's using a computer program of some sort to identify which music to use and how to blend it.  No mortal could EVER be that good.


It's also no secret I love computers.  I love technology (more than Kip, I swear) and I'd hope my pursuit of a PhD in computer science is such an indication.  This semester I am taking Information Visualization, taught by Ben Shneiderman.  Thus it was as if the stars aligned when one of my buddies sent me a link to a nice visualization of the latest Girl Talk album 'All Day':

http://mashupbreakdown.com/

Finally I can see what song is being played at what point.  A few of the items are slightly off, but I think this is a nice way to view Girl Talk's implementation.  It's insane to think he occasionally mixes up to 7 songs at a time.

Insane.