Yesterday I recapped Day 1 of my experience at CXL. Today I’m covering the rest of the conference.
Day 2 – Optimization Day
I met a lot of people at Day 1’s evening festivities who were excited about Day 2’s optimization and testing focus. Personally, I was most excited about Day 3 and figured I would appreciate the information I got from the other days. The reality is… so much relies on testing. Figuring out what is successful today relies on testing and failing yesterday – so optimization should be a focus for everyone in marketing.
“A mathematician’s guide to growth optimization” – Candace Ohm
Candace has a Ph.D. in Game Theory and approaches marketing from a holistic perspective, bringing advanced mathematics to interpret marketing. To her credit, she managed to pull off the presentation with an audience that was a little tired from the night before and she was the opening speaker! I joked with another conference attendee that it is a little rough to be the opening act, after the previous night’s festivities, and have your talk be focused on applied math.
The audience warmed up to her right away, and Candace left most of us with our mouths wide open at the possibilities of tying users to exponential value. A great quote from Candace was, “Kiss your attribution model goodbye,” to which I was ready to put mine down and punt it through the uprights.
Credit – Giphy: https://media.giphy.com/media/6UMpXKgsu0JwI/giphy.gif
Similar to Tara’s graphs of the retention curve, Candace used a graph and broke growth down into three phases: Acquisition (new clients), Activation (onboarding phase), and Retention (continued value). Candace uses this graph to showcase how different types of users actually correspond to patterns of usage, where the new/onboarding users interact differently than the users who are in the retention phase. Candace gave us a nifty equation we could use to compute our own Growth Curve: User Demand * Value per User = Growth Curve.
Candace suggests that we aim for creating interactions with customers that produce added value to the entire customer base so we can tap into exponential growth. Her talk references the life cycle of a customer, and she points out that for most companies, the Pareto principle (80/20 phenomenon) will likely turn up.
“80% of the value comes from 20% of your customers”
– Candace Ohm
My question was… ‘How do I get exponential growth out of our customers?’ Using some decidedly non-layman’s math, Candace uses two mathematical laws to explain what exponential growth looks like in a network of customers that a business might have: Metcalfe’s Law – the value an additional user adds to a network through new connections, and Reed’s Law – which focuses on the value of the sub-groups that can form within a given network.
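To get a feel for the two laws, here is a rough sketch of my own (illustrative units, not anything from Candace’s slides) comparing how network value scales as users join – linearly, by possible connections (Metcalfe), and by possible sub-groups (Reed):

```python
# Illustrative network-value models; units are arbitrary.
def linear_value(n):
    return n                      # each user adds a fixed amount of value

def metcalfe_value(n):
    return n * (n - 1) // 2       # value ~ number of possible pairwise connections

def reed_value(n):
    return 2 ** n - n - 1         # value ~ number of possible sub-groups (size >= 2)

for n in [2, 5, 10, 20]:
    print(n, linear_value(n), metcalfe_value(n), reed_value(n))
```

Even at modest network sizes, the sub-group term dwarfs the others, which is the mathematical heart of the “exponential growth” claim.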
For the non-experts in Ph.D. math (probably the majority), Candace put it simply for us:
- Harvest/Focus on demand from the users that add value to other users.
- Optimize your funnel first for retention by creating value for clients you already have.
Out of all of the speakers, few acknowledged their defeats, failures, or mistakes publicly, but Candace stood out by taking a “failure bow,” admitting she had been misled by early attempts at prediction and data. She reiterated what most marketers live by: failure is necessary.
Recap of Key Learnings
- The three phases of the growth curve.
- Aim for creating interactions with customers that produce added value to all customers.
- In marketing, failure is necessary to learn.
“Demystifying AI in Marketing” – Guy Yalif
Guy Yalif hails from Intellimize, a predictive conversion and campaign management company. Guy tackles the complex and booming trend in marketing of adding Artificial Intelligence to marketing teams. Before he started, I really hoped he would give me more clarity on the applications of AI and he did not disappoint.
Kicking off his talk, Guy defined AI for marketing as something doing something we would consider ‘intelligent.’ Guy jokes that even though it seems like marketing will be replaced by artificial intelligence, it will likely only make leaps and bounds in specific areas of marketing.
Guy breaks down what AI for marketing is effective at:
- Managing a lot of tasks.
- Accelerated learning.
- Acting w/precision.
- Listening + reacting.
Then he breaks down what AI for marketing is not effective at:
- Producing something from nothing (creative)
- Empathy (emotion)
- Understanding and contextualizing (storytelling)
Guy then points out that for conversion rate optimization you need to give artificial intelligence a specific task – and if you do, it can open the door to opportunities for you. However, it likely will not be able to show you those opportunities without you reading between the lines.
Guy points out a handy step-by-step checklist of where the machine learning within artificial intelligence is strongest: when the data quality is high, the training model is highly purposeful and task driven, and when the predictions it is allowed to make are regularly updated. Using this simple checklist, you can create a machine learning model for your data.
Another rule of thumb Guy handed us was based on the number of ideas that we have and want to test for optimization. Ideas for how to improve, target, or create value can be plentiful or scarce.
If there are lots of ideas – Machine learning application of AI can thrive and be useful.
If there are just a few ideas – AI might be overkill, consider a rules-based variate approach.
If there are minimal ideas – Just use A/B testing.
During the Q&A, Guy pointed out that if you do go the route of adding AI to your stack and are planning to leverage machine learning, you should do so with a measured approach and let the technology do the work… even 24/7 if need be.
Recap of Key Learnings
- The definition of Artificial Intelligence for marketing applications.
- What AI is effective at and not effective at doing.
- Where machine learning within AI is strongest, based on the number of ideas you have.
“The Statistical Pitfalls of A/B Testing” – Chad Sanderson
By the time Chad’s talk came around, I was ready for more complex math (has anyone ever said that before?). I was excited to learn some statistical traps from Chad Sanderson because only a few weeks before I had finished reading Thinking, Fast and Slow by Daniel Kahneman and felt prepared to put my statistics hat back on.
Chad Sanderson’s talk is primarily concerned with interpreting the results of marketing testing: A/B testing and multivariate testing. Assuming you are familiar with statistics and the scientific method, you test a null hypothesis, H0 (Ex: changing the button text has no effect on clicks), against your results.
If you reject H0 when it was in fact true (Ex: you conclude the new button text increased clicks when it really did nothing), you have a Type I error – a false positive. The data fooled you into seeing an effect that was not there.
On the other hand, if you fail to reject H0 when it was in fact false (Ex: you conclude the new button text did nothing when it really did increase clicks), you have a Type II error – a false negative. The effect was real, but your test missed it.
Chad used the errors to point out that you not only need to protect yourself from those common statistical errors but you need to ensure that you are reducing your overall chance for errors. Chad calls it, “Error Protection,” and points to P-Values as a decent indicator for setting up tests.
Chad defines the P-Value as the probability of observing a result at least as extreme as yours, assuming the null hypothesis is true. As a rule of thumb, not every test runs under ideal conditions, and circumstances can shift while a test is live. This is why Chad suggests that a test be run over a long period of time and repeated at some frequency because things can change.
Even with multiple tests, Chad points out that the distribution of P-Values under a true null hypothesis is uniform, suggesting that for marketers this will result in, “Finding results that are actually not there.” Basically, run enough tests and you will eventually see a low P-Value by chance, conclude the change was a success, and be wrong (a Type I error).
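A quick simulation makes this concrete (my own sketch, not from Chad’s slides – the 10% conversion rate and sample sizes are made up): run A/A tests where both variants are truly identical, and roughly 5% of them will still come out “significant” at p < 0.05 purely by chance.

```python
import random
from statistics import NormalDist

random.seed(42)
norm = NormalDist()

def null_p_value(n=500, rate=0.10):
    """One A/A test: both variants share the same true conversion rate."""
    a = sum(random.random() < rate for _ in range(n))  # conversions in variant A
    b = sum(random.random() < rate for _ in range(n))  # conversions in variant B
    pooled = (a + b) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    if se == 0:
        return 1.0
    z = (a / n - b / n) / se
    return 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value

p_values = [null_p_value() for _ in range(2000)]
false_positives = sum(p < 0.05 for p in p_values)
print(f"{false_positives / 2000:.1%} of A/A tests were 'significant' at p < 0.05")
```

No real effect exists in any of these tests, yet a steady trickle of them clears the significance bar – exactly the trap Chad is warning about.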
So how can marketers construct quality hypothesis testing and get valid results free of statistical errors?
Chad Sanderson calls this problem “Alpha Inflation,” and in order to reduce these family-wise error rates he recommends:
- Bonferroni Correction – divide your alpha by the number of tests so that the family of tests as a whole stays at your desired significance level (90, 95, 99, etc.).
- Dunn-Šidák Correction – adjust the alpha to 1 − (1 − α)^(1/m) and reject any null hypothesis whose p-value falls under that adjusted level; slightly less conservative than Bonferroni.
- Holm/Hochberg step procedures – order your p-values and compare each to a progressively adjusted critical value; Hochberg’s step-up variant leans on Simes’ inequality, which adds assumptions about the joint distribution of the tests.
- Benjamini-Hochberg-Yekutieli procedure – controls the false discovery rate rather than the family-wise error rate, with the Yekutieli extension handling arbitrary dependence between tests.
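As a sketch of the first few corrections (the five p-values below are hypothetical, and the code is mine, not Chad’s): with an overall alpha of 0.05 across five tests, Bonferroni compares every p-value against alpha/m, Šidák against 1 − (1 − alpha)^(1/m), and Holm steps down through the sorted p-values with a loosening cutoff.

```python
# Hypothetical p-values from five simultaneous A/B tests.
p_values = [0.003, 0.012, 0.021, 0.040, 0.450]
alpha, m = 0.05, len(p_values)

bonferroni_cutoff = alpha / m                 # 0.05 / 5 = 0.01
sidak_cutoff = 1 - (1 - alpha) ** (1 / m)     # ~0.0102, slightly less strict

for p in p_values:
    print(p, "bonferroni:", p < bonferroni_cutoff, "sidak:", p < sidak_cutoff)

# Holm's step-down: compare the i-th smallest p-value to alpha / (m - i),
# stopping at the first failure.
holm_rejections = 0
for i, p in enumerate(sorted(p_values)):
    if p < alpha / (m - i):
        holm_rejections += 1
    else:
        break
print("Holm rejects", holm_rejections, "hypotheses")
```

Note how Holm rejects more hypotheses than plain Bonferroni on the same data while still controlling the family-wise error rate – that is why the step procedures exist.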
Did your head just explode? Did you black out? Maybe this will help; I’ve deconstructed Chad’s basic checklist for creating a proper question for a hypothesis:
“How much ____ could I expect to see on ___ given ____ traffic by the end of the ____?”
“How much lift could I expect to see on button given 68,500 traffic by the end of the month?”
For the question, “How many tests can I run before errors creep in?” Chad pointed to the basic formula for the chance of at least one false positive across N tests: 1 − (1 − α)^N. Chad added at the end that he is not a Bayesian because those error rates lack context, and recommended that we run tests continually but only to build context for specific questions we have.
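The alpha-inflation formula is easy to sanity-check yourself (a quick sketch of my own): at alpha = 0.05, the chance of at least one false positive grows fast with the number of tests.

```python
alpha = 0.05
for n_tests in [1, 5, 10, 20, 50]:
    fwer = 1 - (1 - alpha) ** n_tests  # chance of at least one false positive
    print(f"{n_tests:3d} tests -> {fwer:.1%} chance of a false positive")
```

By 10 uncorrected tests you are already around a 40% chance of a spurious “win,” which is the whole argument for the corrections above.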
Recap of Key Learnings
- Get nerdy and learn about family-wise error rate corrections and procedures.
- Apply the basic hypothesis question checklist to your tests.
- Run enough tests for your desired level of significance based on the formula.
“Without Research There is Nothing” – Els Aerts
I got to thank Els for her talk afterward, but it wasn’t until the days and weeks following that I realized how many other people enjoyed her talk. The audience had just been trying to recover from a math, AI, and statistics deep-dive when Els took the stage. What we (the audience) really needed was something light, down-to-earth, and humorous, and Els totally nailed it.
For someone so humble, Els is a firecracker. Take the all-or-nothing title of her talk, and you get a taste of how she delivered it. The audience was howling with laughter at one point when her mic went out and she commented on the “good-looking mic technician.”
Els’ talk centered on the need for marketing research but really honed in on what makes research useful for marketers. Els breaks market research down into three parts: User Research, Usability, and Conversion Optimization. She answers the question, “Why do Research?” with the question, “Don’t you want to know what works best for your products or services?”
Yes, but how much research? Els suggests that we facilitate research that comes just in time for impactful decisions to be made and just enough to get the right information that is needed. I feel like she could call this Agile Research based on how she defines it.
Assuming I was going to start researching tomorrow, what should I do first? Els again covers this by saying you should, “Research the proper stuff, User Research.” She points out there are two types of User Research where you are trying to find out about something tied to a user:
- Quantitative (The what, numeric)
- Qualitative (The why, words/sentences)
So which one should I use? Both, according to Els – a lot of companies and marketers rely too much on the quantitative research from their data. There are also times when certain types of testing shine and times when they do not. For example, focus groups are bad for usability testing.
Here are Els Aerts’ three key testing takeaways:
- Test with the right people (Relevant audience)
- Write a good testing scenario (A simple task perhaps)
- Be a good moderator of the test (This could be a whole talk in itself)
Sometimes a talk stands out because you learn a lot, other times because it was entertaining, but Els’ talk managed to do both and filled my journal with lots of notes and scribbles.
Recap of Key Learnings
- Understand why you should do research and then do it.
- Facilitate research that leads to action.
- Start and end with User Research if you are lost, and follow the testing takeaways.
“How to Optimize Big Corporates (with a Lot of Legacy)” – Matt Roach
Matt Roach changed up the day’s topic by focusing on organization as a means to bring about change, testing, and success for marketing. Matt’s talk felt like a guide for marketing leaders to pick and find the best candidates for their teams. I met Matt in the hall the next day and asked him some direct questions about what his advice would be for someone wanting to effectively change something at an executive level from many levels below it…
Matt’s advice? First, he empathized with me and said that changing executive culture is similar to being at war and that I should start with proving my case or idea on a small scale and slowly scale it up. Finally, he suggested that we all ask ourselves, “What is our degree of impact at our company?” Matt said that he concluded that he was not having the impact he desired so he decided to go to a company that could give him a path to finding it.
Matt’s talk at CXL described a standardized framework for optimizing organizations for success. When asked why to standardize a framework for organizational optimization, Matt broke it down into bullets:
- Completeness (working with goals and checklists helps you stay on track).
- Order matters.
- Efficiency and improving performance is always important.
- Don’t lose the focus if staff go away.
- Explaining yourself (being able to show what you’ve been doing helps you stay on task).
- A process for refinement.
Matt even has a value formula for how to determine what value teams and employees can bring to an organization (I think some HR people took a long hard look at this):
Value = (Knowledge + Process) * Skills * Attitude
It’s important to note that Matt concludes that knowledge and process are the “what to do” of a role, while skills and attitude are the “execution” ability of the person in that role. Matt also adds that attitude is very important but is not something you can change – so hire for it wisely. Culture is the environment in which the value formula operates, but that is its extent.
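A toy run of Matt’s formula makes the multiplier effect obvious (the 1–10 rating scale and scores here are my own assumption, purely for illustration):

```python
def role_value(knowledge, process, skills, attitude):
    """Matt's formula: Value = (Knowledge + Process) * Skills * Attitude.
    Inputs are hypothetical 1-10 ratings for a person in a role."""
    return (knowledge + process) * skills * attitude

# Strong knowledge and skills cannot fully compensate for a poor attitude:
print(role_value(knowledge=8, process=7, skills=9, attitude=2))  # 270
print(role_value(knowledge=6, process=6, skills=7, attitude=8))  # 672
```

Because attitude multiplies everything else, a low score there drags the whole result down, which is exactly why Matt says to pick for it wisely.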
Here are Matt’s 7 Drivers for Organizational Success:
- Use a standardized process customized to your situation.
- Lay out the budget and resources you have at your disposal.
- Visualize your organizational structure and hierarchy.
- Identify the skills within your organization.
- Identify the attitudes within your organization.
- Check your culture for alignment at every role.
- The CEO
I wanted to make a special note about the last driver for organizational success, the CEO. Matt pointed out that the reason he found the most success at one of his recent companies was because his CEO was an enthusiast. Since his CEO understood and believed in the capabilities of optimization Matt was able to grow the company. Matt said it was easier to make an impact and prove yourself if you have the right CEO.
Recap of Key Learnings
- Prove yourself on a small scale.
- Determine your degree of impact; are you on the right path?
- Try your hand at the value formula.
- Use the standardized framework to get a better idea of the drivers of your organization’s success – hint: it’s likely not your culture.
“Lessons Learnt from Creating a Large Scale Experimentation Culture” – Mats Einarsen
Mats’ talk at CXL bridged a lot of ideas for me from Matt Roach’s talk I had heard just before. Mats had a lot of great quotes that I wrote down, and a drawing of his systematic testing chart that I crudely copied into my journal.
First, Mats grounded us, especially me, when he said that when it comes to testing, data, and the presentation of results we need to assume that people (executives/decision makers) do not get it and work backward from there.
Pulling from Ronald Reagan, Mats positioned testing in an organization as follows: “Trust, but verify.” Most of Mats’ entire talk rests on this very important quote – that we need to test, but the results should always be under scrutiny and verification. As we learned earlier from the p-value discussion, we can easily fall into statistical traps.
Mats showed us a systematic testing chart with page locations on the rows and columns counting the number of tests performed on specific elements. Using this type of layout, an optimization marketer can keep track of their hypothesis tests, how many have been run, and color them by outcome.
Mats warns us that just because a test did not succeed we do not need to count it as a sunk cost or a wasted test. Everything we test helps us learn and should be treated as such.
Recap of Key Learnings
- Assume that they do not get it.
- Trust but verify your results.
- Try your hand at a systematic testing chart to keep track of your tests over time.
“Scaling experimentation at Airbnb: Platform, Process, and People” – Yu Guo
Yu Guo hails from Airbnb, the home-sharing platform for finding places to stay. Yu is a data science manager and gave me one of the best takeaways from the entire CXL event. Yu’s team has a knowledge repository for all of their tests – that’s genius!
Yu says there are three steps that fuel growth for a company:
- The platform for growth within a company.
- The processes used by teams within a company for growth.
- The people who follow those processes at the company to achieve growth.
On top of all three of those steps sits the knowledge repo: a place for experiments and results that teams and leaders across an organization can view and learn about the types of trials that are occurring in order to strive after growth.
The platform within a company can be likened to the MVP of an organization and the conducive elements within it that promote growth (technology, culture, and incentives). The processes are the directives and the schedule of prioritized tasks through which teams seek growth. Finally, the people are the roles that are motivated to seek out and conduct tests in order to optimize.
Yu points out how crucial it is to have growth that comes from your customers. Since customers and trends come about organically and change radically, companies need a process for continued improvement.
Airbnb is not a small company and has grown quickly in the last several years. If you want to grow and become more customer-centric, I would recommend focusing on improving the three steps of growth Yu provides.
Recap of Key Learnings
- Do you have a knowledge repository for your testing?
- Use the three macro steps that fuel growth to compare to your company.
- Are you customer-centric?
“How to Win at B2B Optimization” – Renee Thompson
Renee Thompson goes into business-to-business lead acquisition in her talk at CXL by analyzing tactics per type of sale. Renee points out that businesses have to deal with different sale types depending on how long it takes to sell the product or service to their customers.
First is the offline sale, which has a complicated pricing structure based on the fluctuation of prices and negotiated final prices.
Second is the long cycle sale, at over 6 months from start to finish.
Third is based on a buyer’s buying team sales cycle, where a group within a company is assigned to vet and buy something.
Fourth and last on Renee’s list is the High-Risk Aversion sales cycle, where there is a natural tendency to be highly risk-averse at all stages of the sales process.
After writing them down, I had the distinct impression that our clients fall into pretty much all of these categories. According to Renee, the different types of sales require a unique type of marketing in order to capitalize on the customer’s pain with targeted solutions.
Renee gave some great pointers when it came to tracking the progress of lead generation for B2B businesses. The most important one was that velocity really matters when it comes to sales. By breaking up the types of customers into sales types, you could track the progress of how quickly customers were moving into and out of the pipeline.
Renee then asked the question, “What metrics matter?” In response, Renee highlights some obvious and not so obvious ideas:
- Leads (obvious), but the focus should be lead quality (not so obvious).
- Engagement of content, the depth of content, and the drop-off points for content.
- Audience qualifying factors such as demographics/firmographics/technographics/etc.
- Attribution modeling that aims at increasing downstream activity.
Recap of Key Learnings
- What types of sales cycles exist for your business?
- How can you capitalize on the sales types?
- What metrics matter for your business? Think velocity.
“E-Commerce and Customer Experience Optimization Practices from Dell.com” – Vab Dwivedi
Vab Dwivedi had an approach from Dell that I fell in love with and brought back to my team. His approach was, “Hold yourselves accountable always, but remember to evolve cyclically.” If we have a solution or disagree with a current solution, the team should always be thinking, “Let’s run a test!” However, the team should always hold itself accountable that tests are guided toward growth aligned with team, department, and company macro goals.
According to Vab, Dell has a market that has some price adjustments on its products, but for the most part, they do not change that often. This made it so that Dell could test out hyper-personalized 1 to 1 types of testing without over-complicated setups. If you have a lot of price change in your products or services, then he says 1 to 1 marketing might be a complicated endeavor.
Vab subscribes to the same type of philosophy as a lot of marketers, KISS (Keep it Simple Stupid). He mentioned over and over again that testing and optimization does not need to be overly complex and if it is, maybe try to find a simpler test.
The best part about Vab’s talk was when he shared Dell’s tenets, as they are impactful takeaways that you should check out:
- Be customer obsessed (customer-centric).
- Act as subject matter experts.
- Deliver advanced analytics that are consumable (huge personal takeaway here for me).
- Data should guide decision making.
- You should serve as a channel of a specific type of knowledge.
- Be agile, push innovation that creates continued value.
Finally, Vab said that his team discovered a lot of their success came from eliminating errors and issues in the customer’s experience during their time on their site. They had cart issues, link issues, and other errors that ruined tests completely. Vab’s advice: focus on eliminating errors if you are looking to test.
Recap of Key Learnings
- Hold yourself accountable to growth.
- Keep your testing and optimization simple, focus on error elimination.
- What tenets does your team believe in and follow?
“May the Best Ideas Win (They Usually Do)” – Merritt Aho
If I had to summarize Merritt Aho’s talk from CXL, I would have to say that it’s all about fostering innovation by removing the barriers to creativity. Often creativity does not have a place in marketing teams (as ironic as that sounds) because there is some tried and true method for doing something and no deviation is allowed.
I think best practices are very important to follow – so is tradition – but it is impractical to assume that nothing should ever change. Merritt’s strategy is to use your company and the intelligence of your organization in order to leverage the signals for change.
Merritt wants you to take innovation seriously and give ideas a chance by creating a process for ideation. He suggests that ideas that work are those that can come from anywhere but give you a path to win. Similar to other talks, Merritt uses a process for optimization that starts with research and ends with execution so I will not focus on those parts of his talk – they are basics to testing.
The meat of Merritt’s talk came out of the meeting template he shared for how to get great ideas out of your own organization. He says, “Smart ideas from your own skilled staff means reduced risk for a company.” I agree because your staff is often close enough to the flames to have some of the best ideas.
The template is designed to eliminate availability bias and get to fresh ideas:
- Gather 3-5 people across disciplines at your organization.
- Set the mindset of the group so that they know it is a think tank. You want quantity > quality of ideas (Merritt calls it a brain dump).
- Openness; create a safe space free of critiquing, no fighting/debate, move through ideas and do not settle on anything yet.
- Narrow the topics of discussion to a single topic or problem you want ideas for.
- After ideas settle down, pick one (can be a vote) and challenge the group to build on the idea. Consider at this point to draw it, sketch it, or diagram how it works.
- Finally, thank everyone and take what you learn back to your desk and take time to think about it. Did the ideas help you, take you down a different course, or do nothing?
The template of an ideation meeting should give you answers to the problem you had. If it did not, consider a different approach or rethinking the question. Merritt suggests that ideas should come from multiple sources in an organization; he calls it the end of the “Lone Wolf” era.
Recap of Key Learnings
- Do you have problems that require innovative solutions?
- What are the sources for ideas in your organization?
- Consider the template for successful innovation to eliminate availability bias.
Day 3 – Analytics Day
Enter Day 3, a shorter day, but a day focused on analytics. What could be better?
“Extending User Experience Analytics into the Real (non-digital) World” – Gary Angel
No talk pulled the rug out from under me quite like Gary Angel’s talk at CXL. I had never heard of in-store analytics at the level Gary’s company is taking the experience. I couldn’t help but hunt him down after he gave his talk and ask him about the application to theme parks, zoos, and adding 3D to the mix.
Gary’s talk centers on physical stores that have their marketplace in a physical and tangible space. Think brick-and-mortar stores when I say physical stores. Gary mentions that the internet has all of these amazing analytics, but analytics and optimization are sorely lacking in the physical store space. So… Gary took his experience in analytics and applied it to physical store analytics.
Gary’s company focuses on providing analytics to companies who have physical store experiences. He aims to give physical stores omnichannel experiences for all of their channels by tracking and measuring the engagement and activities of customers in a store.
Every few minutes during his talk, Gary brings up the importance and value that analytics can have for a business. He shows the audience a sports stadium and talks about how he can track movement and patterns on each level. Then Gary shows us a Gap store or another physical store whose layout is mapped digitally, and he can make heat maps based on customers’ movements in the store. The feeling was, “Whoa! That’s amazing!”
To anyone with a physical store, check out what Gary has to offer.
Recap of Key Learnings
- Physical stores lack omnichannel tracking and analytics.
- Analytics can have tremendous value for physical stores.
- There are solutions, check out Gary Angel and Digital Mortar if you have physical storefronts.
“The Pursuit of Customer Happiness: Why Customer Experience Across Devices Matters” – Moe Kiss
The line to talk to Moe Kiss was super long after Day 3 ended at the lunch break. People were standing around for about an hour hoping to talk to her before she left. Moe works at The Iconic, a fashion company out of Australia. I have a hard time picking a favorite talk, but Moe’s talk was so applicable to me that I didn’t have any room left in my conference journal for additional notes.
For anyone familiar with advanced analytics, The Iconic uses Snowplow and some other tools to effectively stitch users across platforms, devices, and browsers into a single identifiable profile. This costly and incredibly complex process involves a lot of steps but gives you a single person where you might have seen multiple users before. The unfortunate part is that it still relies on cookies, something I asked Moe about, but she points out that her users are required to log in, so it’s less of an issue for her.
Moe comes in soft but leaves with a bang. Consider this statistic she provided, “Happy customers are worth 40%+ more than your standard customer.” This should make everyone go… how do I get happier customers? I can imagine Moe saying, “I’m glad you asked!”.
So how many of your customers have just one device? How many of your customers are shopping for your products or services from just one device? Hilariously, Moe answers, “No such thing.” Journeys now span more and more devices, and the result is that you can no longer fully trust analytics that only measure one of them.
Maybe you aren’t so convinced… so then I would ask, what does customer value mean to you at your business? Is it:
- High engagement? (Yes)
- Advocacy/Referrals? (Yes)
- Loyalty, repeat business, upsells, etc.? (Yes)
All of those are trackable, and all of those are cross-device and cross-platform. Moe recommends we change the way we optimize to match the way people actually think. Since journeys are not linear anymore, we need a model that embraces the variations.
Moe quotes Dr. Peter Fader here: “Embrace the randomness of customer behavior.” There are a lot of interactions that occur between a business and its customers, so it is important to map and understand that flow. Ultimately, if you understand the flow, you can understand and optimize for the customer experience.
The higher your confidence level is with identifying individual users, the more you will trust your data and make decisions based on that data. If you trust your data, you can challenge data silos, effectively move the needle in your company’s decision-making process, and you can fix user bottlenecks.
Moe is not a fan of a lot of metrics used to incentivize but instead prefers metrics for diagnostics. So use your data to the best of your abilities – and if you are pursuing customer happiness then start with unifying your users. You can catch Moe Kiss on her podcast @analyticsshow.
Recap of Key Learnings
- Do you have user duplication issues?
- Happy Customers are worth 40%+ more.
- Start optimizing with the way people think.
- Embrace the random, non-linear, multi-device journey.
“Building an Optimization Framework driven by the Cloud and AI” – Rachel Sweeney
Rachel Sweeney hails from the Google Cloud Product team, and her talk at CXL was focused on a case study of her team building an application for Machine Learning. A huge takeaway for me from her talk came in the wisdom she shared about beginning a machine learning process, “Get a Data Scientist.” Yes, that’s right marketers, business experts, etc.… you probably need a data scientist before you wade into the waters of machine learning.
Once you’ve got a Data Scientist, you need to have a pow-wow with that Data Scientist, so you both are clear and open about what questions you are looking to solve with Machine Learning. Rachel puts the question into a formulaic expression:
You need a TASK (something), with measurable EXPERIENCES (data) that relates to a PERFORMANCE (goal).
Next, Rachel says you need to decide on a project area: what task you will use, what you will measure, and the data scientist can help you construct your question for why you even need to use machine learning to accomplish it. Hint: similar to the AI talk in Day 2, it’s when rules-based is not sufficient.
At this point, Rachel’s steps for machine learning point out the pre-requisites for a successful project. You need tidy, standardized, and accessible data for machine learning to be successful.
Once you and your data scientist are ready (yes that’s right), you can create a prototype for the project flow to make sure you have accounted for everything. Stress test your idea and then deploy your project.
Rachel suggests that you can look to examples of current tools for ideas of machine learning capable tasks. In the end, report, iterate, and keep testing your project.
Recap of Key Learnings
- Get a Data Scientist to help you if you are considering Machine Learning.
- Build a question around the formulaic expression Rachel provides.
- Report and iterate your project for continued success.
Should you go to marketing conferences like CXL for learning? Yes, absolutely. There are conferences for almost every specialty of marketing. If you are willing to learn from experts in the industry that you can apply to your business, then I’d ask, what are you waiting for?
CXL was an amazing conference, and I hope I get to attend next year. I think the conference shined for teams bigger than one person, who can focus on growth and optimization and are willing to build their companies around strategic ideas. The theme of this conference for me was “customer-centricity,” and I think it’s a tide that is going to sweep all of the product-oriented companies out of the marketplace in the next decade if they don’t change their game.
P.S. BBQ in Austin is Amazing.