Different Channel, Different Video, Similar Patterns

Can This Go Live? #3

2025 July 5

Disclaimer: The following is an educational analysis based on the author's interpretation of publicly available data using the framework outlined in this post. It constitutes the author's opinion and is not a definitive assertion of fact.

What is this blogpost about?

This is another addition to the blogpost series "Can This Go Live?". The first blogpost outlining the series' focus and the intended audience is here.


What did I stumble upon?

About the video
About my interest in the video

One of my friends is incredibly funny. In 2010, in the midst of his jibber jabber of then-contrarian takes such as "Modern Family is a funnier show than Friends" and "Dwyane Wade and LeBron James won't win a championship, two alpha dogs can't coexist as one" [1], my friend began posting frequently about Conan O'Brien. His Facebook wall was filled with "I'm with Coco" pictures and quotes. Because I found my friend incredibly funny and he was showing immense support for Conan O'Brien's comedic works, I started consuming Conan's bits. Since then, I've been a big fan of Conan's work, and I listen to his latest bits on his YouTube channel and other channels that host him [2].

About how I stumbled upon the examples

After publishing the first two blogposts in this series, I continued my routine of scrolling through the comments section with comments sorted using the "Newest first" option. While watching the latest video by Team Coco, I noticed patterns similar to those previously described. At the bottom of the section, I came across 2 accounts, referred to as "G1 accounts" for the remainder of this blogpost. The accounts posted comments similar to those posted by the accounts written about in the "Can This Go Live? #2" blogpost. The comments (i) expressed gratefulness for the creator's work, (ii) shared a similar content structure of 2 sentences followed by 3 emojis, and (iii) ended with gibberish emoji sequences (e.g. 💖🍁✨ and 🍫🌹📸). See Figure 1.

Figure 1

Figure 1: Comments with some recognizable patterns.
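
As an illustration only, below is a minimal sketch of how the pattern described above (roughly 2 short sentences followed by a run of 3 or more emojis) could be flagged programmatically. The heuristic, the threshold values, and the sample comments are my own assumptions for the sketch, not a description of any platform's actual detection logic and not the actual comments from Figure 1.

```python
import re

# Rough emoji character range; intentionally incomplete, good enough for a sketch.
EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def matches_pattern(comment: str, min_trailing_emojis: int = 3) -> bool:
    """Flag comments shaped like '<~2 short sentences> <run of emojis>'."""
    text = comment.strip()
    # Count the emojis at the very end of the comment.
    trailing = 0
    for ch in reversed(text):
        if EMOJI_RE.match(ch):
            trailing += 1
        elif ch.isspace():
            continue
        else:
            break
    # Strip the emojis and count sentence-ending punctuation in what remains.
    body = EMOJI_RE.sub("", text).strip()
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    return trailing >= min_trailing_emojis and 1 <= len(sentences) <= 3

# Invented examples shaped like the observed comments, not the real ones.
print(matches_pattern("Thank you for this. Your work brightens my day! 💖🍁✨"))  # True
print(matches_pattern("Great interview, loved the stories."))                     # False
```

A heuristic this naive would of course flag plenty of authentic comments too; in the walkthrough below, it is the combination of this comment shape with account-level signals that makes the pattern look coordinated.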


Continuing the comparison between the activities/behaviors in this Team Coco video and those in the previous Emergency Awesome video, one account in the current video and one account in the previous video have the same account profile picture [3]. See Figures 2 and 3 and Table 1.

Figure 2

Figure 2: The account profile of a comment poster on the current video by Team Coco.


Figure 3

Figure 3: The account profile of a comment poster on the previous video by Emergency Awesome.


Table 1: A comparison of 2 near-duplicate accounts across 2 different videos.

|                             | Team Coco video              | Emergency Awesome video      |
|-----------------------------|------------------------------|------------------------------|
| Account figure              | Figure 2                     | Figure 3                     |
| Account profile picture [4] | P                            | P                            |
| Account name [5]            | A                            | B                            |
| Account username [6]        | A-a1b2c                      | A-d3e4f                      |
| Account description         | Links to 2 accounts: X1, X2. | Links to 2 accounts: X1, X2. |
| Creation date               | 2025 July 3                  | 2025 July 3                  |

Based on Table 1, the accounts may be perceived as "near-duplicate" accounts because of:

  1. the same account profile picture,
  2. the same account description linking to the same 2 accounts (X1 and X2), and
  3. the same creation date (2025 July 3), despite the accounts having different account names and usernames.
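
To make the comparison above concrete, below is a minimal sketch of how the attributes in Table 1 could be compared programmatically. The field names and the two records are placeholders mirroring Table 1 (with the profile picture represented by a stand-in hash), not real account data.

```python
from dataclasses import dataclass, asdict

@dataclass
class AccountRecord:
    profile_picture_hash: str   # stand-in for e.g. a perceptual hash of the picture
    name: str
    username: str
    description_links: tuple
    creation_date: str

def matching_fields(a: AccountRecord, b: AccountRecord) -> list:
    """Return the attribute names on which the two records agree."""
    da, db = asdict(a), asdict(b)
    return [field for field in da if da[field] == db[field]]

# Placeholder records mirroring Table 1.
team_coco_account = AccountRecord("P", "A", "A-a1b2c", ("X1", "X2"), "2025-07-03")
emergency_awesome_account = AccountRecord("P", "B", "A-d3e4f", ("X1", "X2"), "2025-07-03")

print(matching_fields(team_coco_account, emergency_awesome_account))
# ['profile_picture_hash', 'description_links', 'creation_date']
```

Three of the five fields match, including the profile picture and the outbound links in the description, which is what motivates the "near-duplicate" label.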


Summary of the seemingly problematic activities/behaviors

Based on the exploration of publicly available information, there is reason to believe that there is a group of coordinated, inauthentic, near-duplicate accounts acting in such a way as to (1) direct users to a collection of on-platform accounts and then (2) once more direct users to off-platform products/services (e.g. URLs hosted on "beacons.ai" [7]). A hypothesis for the group's incentive is that the group aims to increase traffic to malicious digital products/services while minimizing the chance of detection by using (1) the multi-layered redirection and (2) a group of newly created, near-duplicate accounts with similar comment posting behaviors. For a more comprehensive outline of what this group of coordinated, inauthentic, near-duplicate accounts does, refer to the process outline in the summary section of the "Can This Go Live? #2" blogpost.
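
As an aside for readers who want to look at such redirection chains themselves, below is a minimal sketch of how the outbound links on a landing page (such as a "beacons.ai"-style page collected from an account description) could be enumerated for later review. The URL is a placeholder, and the sketch assumes the page exposes its links in plain HTML; many link-in-bio pages render links with JavaScript, in which case this approach returns little and a headless browser would be needed.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class LinkCollector(HTMLParser):
    """Collect href values from anchor tags on a single page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def outbound_links(url: str) -> list:
    """Fetch a landing page and return the raw href values found on it."""
    request = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(request, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    return collector.links

# Placeholder URL; avoid fetching links you suspect are malicious from a personal machine.
# for link in outbound_links("https://example.com/some-landing-page"):
#     print(link)
```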


What are some thoughts after the walkthrough above?

Since the previous blogpost, I looked through YouTube's Creator Insider YouTube channel to find videos which spoke about spam, because I believe that folks at YouTube are likely aware of and working to resolve the problematic patterns. This video from July 2021 shares some information on mitigation efforts focused on comment spam. Based on my understanding of the video, comment spam is a longstanding and evolving issue, and mitigation efforts will bring the spam asymptotically closer to 0. Also, this video from April 2023 shares some information on the community guidelines policy development and (program-driven and person-driven) enforcement process. Based on my understanding of the video, spam seen in production could be due to (1) an active detection workflow needing more time to take effect, (2) an active detection workflow making a debatable judgment call on a nuanced edge case, or (3) a detection workflow needing enhancement or needing to be created.

After watching the videos, these are the questions I have in mind:

With regard to the scale of this issue
With regard to the impact of this issue
With regard to mitigation of this issue
Disclosures
  1. I worked in Trust & Safety for 7 years. In none of those years did I work on the YouTube UGVC platform written about in this blogpost.
  2. I interview for open roles and positions, occasionally, with some of the companies written about in my blogposts and/or their parent companies.
  3. I own equity in some of the companies written about in my blogposts and/or their parent companies.
Footnotes
  1. In their 4 seasons as teammates on the Miami Heat, 2 of those 4 seasons resulted in a championship. Ultimately, the partnership was a success. However, much has been written about how the pair won those 2 championships only after a clear bifurcation of roles arose between the two incredible players. Based on this, my friend's take was contrarian at the time.
  2. One day, I hope to learn a new language or eat fast food with Conan O'Brien.
  3. "The same" is based on a person's judgment by doing a visual comparison by using my own perception. A second investigation type test that can be done is to do a Google Search using the search by image functionality. A third investigation type test that can be done is to use open-source and/or free picture analyzer APIs such as the Python programming module's SciKit Image module. Although, given that such APIs return a score, it is recommended that one learns what the score ranges mean, and also use 2 or more APIs if the approach was to be scaled.
  4. "P" is intended to reflect that the photos of the two accounts look the same.
  5. Name is an obfuscation of a person's name. Therefore, in the unlikely event that these accounts are two uncoordinated, authentic accounts, then the obfuscation minimizes the risk of harassment and/or doxxing. One note to add, "A" and "B" are intended to reflect different values in actual names e.g. "A" == "Joe" and "B" == "Zhou".
  6. Username is an obfuscation of a person's name. Therefore, in the unlikely event that these accounts are two uncoordinated, authentic accounts, then the obfuscation minimizes the risk of harassment and/or doxxing. One note to add, the "A" in both usernames are intended to match the "A" in the "Account name" row.
  7. While the beacons.ai domain is not inherently malicious, the specific outbound, off-platform links may be malicious given the limited available knowledge in search engines and LLMs about those links.
  8. YouTube creators have control of comment settings. See here.
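
Referenced in footnote 3: a minimal sketch, assuming the two profile pictures have been saved locally as image files, of how a similarity score could be computed with scikit-image's structural similarity (SSIM). The file names are placeholders, and, as the footnote notes, the score ranges would still need to be interpreted with care, ideally alongside a second method such as a perceptual hash.

```python
# Minimal sketch (see footnote 3): comparing two profile pictures with
# scikit-image's structural similarity (SSIM). File paths are placeholders.
from skimage import io, transform
from skimage.metrics import structural_similarity

def profile_picture_similarity(path_a: str, path_b: str, size=(128, 128)) -> float:
    """Return an SSIM score; values closer to 1.0 mean more visually similar."""
    # Load both pictures as grayscale floats in [0, 1] to sidestep channel mismatches.
    img_a = io.imread(path_a, as_gray=True)
    img_b = io.imread(path_b, as_gray=True)
    # Resize to a common shape so the two arrays can be compared directly.
    img_a = transform.resize(img_a, size, anti_aliasing=True)
    img_b = transform.resize(img_b, size, anti_aliasing=True)
    return structural_similarity(img_a, img_b, data_range=1.0)

if __name__ == "__main__":
    score = profile_picture_similarity("account_profile_1.png", "account_profile_2.png")
    print(f"SSIM: {score:.3f}")
```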