Strata, Ninjas, Distributed Data Day, and Graph Day Trip Recap

This last week was a helluva run of trips, conferences to attend, topics to discuss, and projects to move forward on. In this post I'll attempt to run through the gamut of events and the graph of things traversing from the conference nodes onward! (See what I did there? Yeah, transpiling that graph verbiage onto events and related efforts!)

Monday Flight(s)

Monday involved some flying around the country for me via United. It was supposed to be a single flight, but hey, why not some adventures around the country for shits and giggles, right? Two TILs (Things I Learned) that I might have known already, but repetition reinforces one's memory.

  1. If you think you've bought a nonstop ticket, be sure to verify there isn't a stopover halfway through the trip. If there are any delays or related changes, your plane might be taken away, you'll get shuffled off to who knows what other flight, and then you end up spending the whole day flying around instead of the six-hour flight you were supposed to have.
  2. Twitter sentiment tends to be right: it's good policy to avoid United. They schedule their planes, logistical positions, and crews in ways that quickly become problematic when there's a mere minor delay or two.

Tuesday Strata Day Zero (Train & Workshop Day)

Tuesday rolled in and Strata kicked off with a host of activities. I rolled in to scope out our booth, but overall Tuesday was a low-yield activity day. Eventually I met up with the team and we rolled out for an impromptu team dinner, drinks, and further discussions. We headed off to Ninja, which, if you haven't been, is a worthy adventure for those brave enough. I had enough fun that I felt I should relay this info and provide a link or three so you too could go check it out.

Wednesday Strata Day One

Day one of Strata kicked off, and my day involved mostly discussions with speakers, meetings, a few analyst discussions, and going around to booths to check out which technology I needed to add to my "check it out soon" list. Here are a few of the things I noted that are now on the list.

I also worked with the video team and cut some video introductions for Strata and the upcoming DataStax Developer Days announcements. DataStax Developer Days are free events coming to a range of cities. Check them out here and sign up for whichever you're up for attending. I'm looking forward to teaching those sessions and learning from attendees about their use cases and the domains in which they're working.

The cities you’ll find us coming to soon:

I wish I could come teach in every city, but I narrowed it down to Chicago and Dallas. If you're in those cities, I look forward to meeting you there! Otherwise you'll get to meet other excellent members of the team!

That evening we went to Death Ave. The food was great, the drinks solid, and the name simply straight-up metal. Albeit it's a rather upper-crust dining experience, and no brutal metal was in sight or earshot. Still, I'd definitely recommend the joint, especially for groups; they have a whole room you can book if you've got enough people, which improves the experience over standard dining.

Thursday Strata Day Two

I scheduled my flights oddly for this day, which in turn left me without any time to spend at Strata. But those are the issues one runs into when things are booked back to back on opposite coasts of the country! Thus, this day involved me returning to Newark via Penn Station and flying back out to San Francisco. As some of you may know, I'm a bit of a train geek, so I took a New Jersey Transit NEC (Northeast Corridor) train headed for Trenton out of Penn back to the airport.

The train, whether you're taking the Acela, a Metroliner, NJ Transit, or whatever is rolling along to Newark that day, is the way to go in my opinion. I've taken the bus, which is slightly cheaper, but meh, it's an icky east coast intercity bus, and the difference in price is a buck or three, nothing significant. Of course you can jump in an Uber, taxi, or other transport too, but even when they can make it faster I tend to prefer the train. It's just more comfortable, I don't have to deal with a driver, and it's more reliable. The turnpikes and roadways into NYC from Newark aren't always 100%, and during rush hour don't even expect to get to the city in a timely manner. To each their own, but for those who might not know, beware the taxi price range of a $55 base plus tolls, which often puts your trip into Manhattan into the $99-or-above range. If you're going to any other borough, you'd better go ahead and take out a bank loan.

The trip from Newark to San Francisco was aboard a United Boeing 757. I kid you not, regardless of airline, if you get to fly on a 757 versus a 737 or an Airbus A319 or A320, it's preferable, especially for flights in the 2+ hour range. There's just a bit more space, the engines make less noise, the plane flies smoother overall, and the list of comforts is a smidgen better all around. The 757 is the way to go for cross-continent flights!

In San Francisco I took the standard BART route straight into the city and over to the Airbnb I was staying at in Potrero Hill, right by Farley's on Texas Street if you know the area. I often pick the area because it's cheap (relatively), super chill, has good food nearby, isn't really noisy, and is super close to the venue for the Distributed Data Summit and Graph Day conferences.

The rest of Thursday included some pizza and a short bout of hacking some Go. Then a moderately early turn in around midnight to get rested for the next day.

Friday Distributed Data Summit

I took the short stroll down Texas Street. While walking I watched a few Caltrain commuter trains roll by, heading into downtown San Francisco. Eventually I got to 16th, crossed the rail line, and found the walkway through campus to the conference venue. I walked toward the building entrance, and there was my fellow DataStaxian Amanda. We chatted a bit and then I headed over to check out the schedule and our DataStax booth.

We had a plethora of our rather interesting and fun new DataStax t-shirts. I'll be picking some up week after next during our DevRel week get-together. I'll be hauling these back up to Seattle and could prospectively get some sent out to others in the US if you're interested. Here are a few pictures of the t-shirts.

After that I joined the audience for Nate McCall's keynote. It was good; he drew a solid parallel between life and finding one's way into working with and on Cassandra. Good kickoff, and afterward I delved into a few other talks. Overall, all were solid, and some will even have videos posted on the DataStax Academy YouTube account. Follow me @Adron or the @DataStaxAcademy account to get the tweets when they're live, or alternatively just subscribe to the YouTube channel (honestly, that's probably the easiest way)!

After the conference wrapped up we rolled through some pretty standard awesome hanging out DevRel DataStax style. It involved the following ordered events:

  1. Happy hour at Hawthorne in San Francisco with drink tickets, some tasty light snacks, and most excellent conversation about anything and everything on the horizon for Cassandra and also a fair bit of chatter about what we’re lining up for upcoming DataStax releases!
  2. BEER over yonder at the world-famous Mikkeller Bar. This place is always pretty kick-ass. Rock n' roll, seriously stout beer, more good convo and plotting to take over the universe, and an all-around good time.
  3. Chinese food in CHINATOWN! So good! Some chow mein, curry, and a host of things. I'm a big fan of always taking a walk into Chinatown in San Francisco and getting some eats. It's worth it!

Alright, after that, unlike everybody else who then walked a mere two blocks to their hotel or took a Lyft back, I took a solid walk all the way down to the Embarcadero. I walked along for a bit until I decided I'd walked enough and boarded a T Third line train out to Dogpatch, then walked the last six or so blocks up the hill to Texas Street. Twas an excellent night and a great time with everybody!

Saturday Graph Day

Do you do graph stuff? Lately I've started looking into graph database tech again, since I'll be working on and putting together some reference material and code around the DataStax graph database that's built onto the Cassandra distro. Honestly, I'm still kind of a newb at a lot of this, but I'm getting it figured out quickly. I do, after all, have a ton of things I'd like to put into, and be able to query against, a graph database. Lots of graph problems, of course, don't directly correlate to a graph database being the whole solution, but it's indeed part of the solution!

Overall, it was an easy day; the video team got a few more talks and I attended several myself. Again, same as previously mentioned: subscribe to the channel on YouTube or follow me on Twitter @Adron or the crew @DataStaxAcademy to get notified when the videos are released.

Summary

It has been a whirlwind week! Exhausting, but worth it. New connections were made, and my network of contacts and graph of understanding on many topics have expanded. I even got a short stretch in New York amid all the activity to do some studying, something I always love to break away and do. I do say, though, I'm looking forward to getting back to the coding, Twitch streams, and the day-to-day in Seattle. I've got some solid material coming together and I'm looking forward to blogging that too, and it only gets put together when I'm on the ground at home in Seattle.

Cheers, happy thrashing code!

ML4ALL LiveStream, Talks & More

If you're attending, or if you're at the office or at home, you can check out the talks as they go live on the ML4ALL YouTube channel! During the conference we also have the live feed on the channel, so if you're feeling a little FOMO this might help. Enjoy!

Here are a few gems that are live already!

Manuel Muro “Barriers To Accelerating The Training Of Artificial Neural Networks”

-> Introduction of Manuel

Jon Oropeza “ML Spends A Year In Burgundy”

-> Introduction of Jon

Igor Dziuba “Teach Machine To Teach: Personal Tutor For Language Learners”

-> Introduction of Igor

Barriers To Accelerating The Training Of Artificial Neural Networks – A Systemic Perspective – Meet Manuel Muro

The real breakthrough for the modern Artificial Intelligence (AI) and Machine Learning (ML) technology explosions started back in 1943, when researchers McCulloch & Pitts came up with a mathematical model to represent the function of the biological neuron: nature's gift that allows all life to operate and learn over time. Eventually this research would give birth to the Artificial Neural Network (ANN).
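As a quick aside, the McCulloch & Pitts model is simple enough to sketch in a few lines: the unit fires when the weighted sum of its binary inputs reaches a threshold. The weights and thresholds below are my own illustrative choices, not values from the 1943 paper.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (output 1) if the weighted input sum meets the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights, a threshold of 2 makes the neuron compute logical AND,
# while a threshold of 1 makes it compute logical OR.
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)
```

Chaining layers of units like this (with learned rather than hand-picked weights) is, loosely speaking, what an ANN does.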

Continue reading “Barriers To Accelerating The Training Of Artificial Neural Networks – A Systemic Perspective – Meet Manuel Muro”

Conducting a Data Science Contest in Your Organization w/ Ashutosh Sanzgiri

Ashutosh Sanzgiri (@sanzgiri) is a Data Scientist at AppNexus, the world's largest independent online advertising (ad tech) company. He develops algorithms for machine learning products and services that help digital publishers optimize the monetization of their inventory on the AppNexus platform.

Ashutosh has a diverse educational and career background. He's attained a Bachelor's degree in Engineering Physics from the Indian Institute of Technology, Mumbai, and a Ph.D. in Particle Physics from Texas A&M University, and he's conducted post-doctoral research in Nuclear Physics at Yale University. In addition to these achievements, Ashutosh also holds a certificate in Computational Finance and an MBA from Oregon Health & Science University.

Prior to joining AppNexus, Ashutosh held positions in embedded software development, agile project management, program management, and technical leadership at Tektronix, Xerox, Grass Valley, and Nike.

Scaling Machine Learning (ML) at your organization means increasing ML knowledge beyond the Data Science (DS) department. Several companies have Data / ML literacy strategies in place, usually through an internal data science university or a formal training program. At AppNexus, we’ve been experimenting with different ways to expand the use of ML in our products and services and share responsibility for its evaluation. An internal contest adds a competitive element, and makes the learning process more fun. It can engage people to work on a problem that’s important to the company instead of working on generic examples (e.g. “cat vs dog” classification), and gives contestants familiarity with the tools used by the DS team.

In this talk, Ashutosh will present the experience of conducting a "Kaggle-style" internal DS contest at AppNexus. He'll discuss the motivations for doing it and how the team went about it. Then he'll share the tools they developed to host the contest. The hope is that you too will find inspiration to try something similar in your organization!

From Zero to Machine Learning for Everyone w/ Poul Petersen

BigML was founded in January 2011 in Corvallis, Oregon with the mission of making Machine Learning beautifully simple for everyone. We pioneered Machine Learning as a Service (MLaaS), creating a platform that effectively lowers the barriers of entry to help organizations of all industries and sizes adopt Machine Learning.

As a local company with a mission in complete alignment with that of the conference, BigML would be delighted to partake in this first edition of ML4ALL.

“So you’ve heard of Machine Learning and are eager to make data driven decisions, but don’t know where to start? The first step is not to read all the latest and greatest research in artificial intelligence, but rather to focus on the data you have and the decisions you want to make. Fortunately, this is easier than ever because platforms like BigML have commoditized Machine Learning, providing a consistent abstraction making it simple to solve use cases across industries, no Ph.D.s required.

As a practical jump-start into Machine Learning, Poul Petersen, CIO of BigML, will demonstrate how to build a housing recommender system. In just 30 minutes, he will cover a blend of foundational Machine Learning techniques like classification, anomaly detection, clustering, association discovery, and topic modeling to create an end-to-end predictive application. More importantly, the availability of an API will make it easy to put this model into production, on stage, complete with a voice interface and a GUI. Learn Machine Learning and find a great home – all without paying expensive experts!”

Poul Petersen (@pejpgrep) is the Chief Infrastructure Officer at BigML. An Oregon native, he has an MS degree in Mathematics as well as BS degrees in Mathematics, Physics and Engineering Physics from Oregon State University. With 20 plus years of experience building scalable and fault-tolerant systems in data centers, Poul currently enjoys the benefits of programmatic infrastructure, hacking in python to run BigML with only a laptop and a cloud.


Your First NLP Machine Learning Project: Perks and Pitfalls of Unstructured Data w/ Anna Widiger

Anna Widiger (@widiger_anna) has a B.A. degree in Computational Linguistics from the University of Tübingen. She's been doing NLP since her very first programming assignment, specializing in Russian morphology, German syntax, cross-lingual named entity recognition, topic modeling, and grammatical error detection.

Anna describes "Your First NLP Machine Learning Project: Perks and Pitfalls of Unstructured Data" to us: faced with words instead of numbers, many data scientists prefer to feed words straight from CSV files into lists without filtering or transformation, but there is a better way! Text normalization improves the quality of your data for future analysis and increases the accuracy of your machine learning model.

Which text preprocessing steps are necessary and which ones are "nice-to-have" depends on the source of your data and the information you want to extract from it. It's important to know what goes into the bag of words and which metrics are useful for comparing word frequencies in documents. In this hands-on talk, Anna will show some do's and don'ts for processing tweets, Yelp reviews, and multilingual news articles using spaCy.
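Anna's talk uses spaCy, but the core idea of normalizing text before building a bag of words can be sketched with just the standard library. Everything here is my own illustration, not material from the talk; the stopword set is a tiny hypothetical one (real pipelines use a library-provided list).

```python
import re
from collections import Counter

# Tiny hypothetical stopword list, purely for illustration.
STOPWORDS = {"the", "a", "an", "is", "was", "were", "and", "of", "to"}

def normalize(text):
    """Lowercase the text, keep only word tokens, and drop stopwords."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

def bag_of_words(text):
    """Count normalized token frequencies: the 'bag of words' for the text."""
    return Counter(normalize(text))

bow = bag_of_words("The food was great, the drinks were great!")
# 'great' is counted twice; 'the', 'was', and 'were' are filtered out.
```

Without the normalization step, "The", "the", and "great," (with the comma attached) would all be counted as distinct tokens, which is exactly the kind of pitfall the talk covers.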

Jump or Not to Jump: Solving Flappy Bird with Deep Reinforcement Learning w/ Kaleo Ha’o

The holy grail of machine learning has always been a "Skynet" type of artificial intelligence that could generally learn anything (artificial general intelligence), and the biggest advancement toward achieving such A.I. is deep reinforcement learning (DRL). The reason is DRL's ability to solve a vast array of problems; its applications range from learning any video game, to driving autonomous cars, to landing SpaceX Falcon 9 rockets. As such, DRL is one of the most exciting fields in machine learning, and will continue to be so for the foreseeable future.

This talk will be a deep dive into the math behind an algorithm that uses DRL to solve the video game Flappy Bird. Additionally, the goal of this talk is that even the most math-phobic beginner to ML will walk away excited about DRL and able to verbalize the central equation of DRL. This will be done in two simple steps. First we will derive the Bellman equation (classic reinforcement learning), which provides a framework for the algorithm's decision-making process. Second, we will use that Bellman equation to derive a custom loss function, which will drive the training of the algorithm's deep neural network (deep learning).
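The two steps above can be sketched in a few lines. This is my own tabular toy, not Kaleo's actual Flappy Bird code: the Bellman target is the immediate reward plus the discounted best future value, and the squared difference between the current Q estimate and that target is the loss that would drive a deep network's training (the discount factor and function names are illustrative choices).

```python
GAMMA = 0.9  # discount factor: how much future reward is worth today

def bellman_target(reward, next_q_values, done):
    """Bellman target: r + gamma * max_a' Q(s', a'), or just r at episode end."""
    return reward if done else reward + GAMMA * max(next_q_values)

def td_loss(q_estimate, reward, next_q_values, done):
    """Squared temporal-difference error between the estimate and the target."""
    target = bellman_target(reward, next_q_values, done)
    return (q_estimate - target) ** 2
```

In the tabular case you'd nudge Q(s, a) toward the target directly; in DRL, the same squared error becomes the loss function minimized by gradient descent over the network's weights.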

To keep things simple while presenting, Kaleo will skip the code snippets, but links will be provided where everybody can download the Flappy Bird algorithm's code, a 20-page paper (written by Kaleo) detailing the algorithm, and a list of resources for further study.

The following is a link to the algorithm code and accompanying paper (PDF); the math details can be found in the Algorithms and Techniques section: https://github.com/06kahao/Improved-Q-Learning-Multi-Environment

Kaleo Ha'o (@06kahao) is a graduate of Udacity's Machine Learning Engineer program and a freelance ML engineer who loves to contribute to open A.I. research on reinforcement learning.