Tag: social media

  • Bluesky Boosts Moderation and Enforcement Efforts

    Bluesky Intensifies Content Moderation Policies

    Bluesky is taking a more assertive stance on content moderation and enforcement, aiming to create a safer and more positive user experience. The company is actively refining its strategies to address harmful content and policy violations effectively.

    Enhancing Moderation Techniques

    Bluesky has rolled out more advanced automated tooling to flag content that likely violates community guidelines, such as spam and harassment. These flags are then reviewed by human moderators.

    For high-certainty violations, e.g., spam or fraudulent accounts, the detection and moderation process is being sped up, in some cases to seconds for automated detection, to reduce harm.
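
    A minimal sketch of what such a triage rule could look like; the thresholds, category names, and function names here are assumptions for illustration, not Bluesky's actual pipeline:

    ```typescript
    // Hypothetical triage: only high-confidence flags in well-understood,
    // high-precision categories skip the human queue.
    type ViolationCategory = "spam" | "fraud" | "harassment" | "other";

    interface Flag {
      postUri: string;
      category: ViolationCategory;
      confidence: number; // classifier score in [0, 1]
    }

    const AUTO_ACTION_THRESHOLD = 0.98; // assumed high-certainty cutoff
    const AUTO_ACTION_CATEGORIES: ViolationCategory[] = ["spam", "fraud"];

    function triage(flag: Flag): "auto-takedown" | "human-review" {
      const highCertainty =
        flag.confidence >= AUTO_ACTION_THRESHOLD &&
        AUTO_ACTION_CATEGORIES.includes(flag.category);
      return highCertainty ? "auto-takedown" : "human-review";
    }
    ```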

    Ozone: Open-Source Moderation, Custom Filters, and Labelers

    Bluesky released Ozone, an open-source moderation tool that lets users and third-party developers build and run their own moderation and labeling services. Users can then subscribe to these services, called labelers, to apply extra filters, labels, or suppression of certain kinds of content.

    For example, a labeler might block or hide images of spiders, filter out certain types of posts, or hide content that doesn’t meet certain user preferences.
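
    As a hedged illustration, here is roughly what a minimal arachnophobia labeler could look like. The label fields (src, uri, val, cts) follow the general AT Protocol label shape, but the classifier and the "spider" label value are assumptions made for this sketch:

    ```typescript
    // Sketch of a labeler decision: emit a label record for matching posts.
    interface Label {
      src: string; // DID of the labeler service emitting the label
      uri: string; // AT URI of the labeled post
      val: string; // label value, e.g. "spider"
      cts: string; // creation timestamp (ISO 8601)
    }

    // Stand-in classifier; a real labeler would run an image model or a
    // human review queue here.
    function looksLikeSpider(altText: string): boolean {
      return /spider|arachnid|tarantula/i.test(altText);
    }

    function labelPost(labelerDid: string, postUri: string, altText: string): Label | null {
      if (!looksLikeSpider(altText)) return null;
      return {
        src: labelerDid,
        uri: postUri,
        val: "spider",
        cts: new Date().toISOString(),
      };
    }
    ```

    Clients subscribed to such a labeler can then blur or hide any post carrying the "spider" label, according to each user's preferences.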

    Anti-Harassment and Spam/Bot Detection

    Techniques to detect and restrict creation of multiple accounts used for harassment.

    Automatically hiding malicious replies (replies that violate guidelines) to reduce their visibility in threads.

    Efforts to detect fake or spam accounts rapidly so they can be removed or restricted before they do much harm.
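
    As an illustration of the reply-hiding idea above, here is a hedged sketch; the toxicity scorer, lexicon, and threshold are placeholders, not Bluesky's implementation:

    ```typescript
    // Hidden replies stay in the thread but are collapsed by default,
    // reducing visibility without deleting the content outright.
    interface Reply {
      uri: string;
      text: string;
      hidden: boolean;
    }

    // Placeholder scorer; a real system would use a trained model.
    function toxicityScore(text: string): number {
      const lexicon = ["slur-a", "slur-b"]; // illustrative stand-ins
      const hits = lexicon.filter((w) => text.toLowerCase().includes(w)).length;
      return Math.min(1, hits * 0.5);
    }

    const HIDE_THRESHOLD = 0.8; // assumed cutoff for auto-hiding

    function moderateReply(reply: Reply): Reply {
      return toxicityScore(reply.text) >= HIDE_THRESHOLD
        ? { ...reply, hidden: true }
        : reply;
    }
    ```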

    Moderation Lists, Filters, and User Controls

    Bluesky allows users to create moderation lists: groups of users they can block or mute all at once. There are also lists that let users mute entire Starter Packs or other groups.

    Users can set or sync their moderation preferences across devices. They can also report content or mislabeled posts, for example if adult-content labels are misapplied.
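
    At its core, a moderation list is a set of account identifiers applied to a feed in one pass, which is what lets list-based muting scale to whole groups. A minimal sketch, with types and names that are illustrative rather than Bluesky's client code:

    ```typescript
    interface Post {
      uri: string;
      authorDid: string;
      text: string;
    }

    // A mute list reduces to a set of account DIDs.
    type MuteList = Set<string>;

    function applyMuteList(feed: Post[], muted: MuteList): Post[] {
      // Drop every post from a listed account in a single pass.
      return feed.filter((post) => !muted.has(post.authorDid));
    }

    // Usage: mute everyone on a list at once (example DIDs are made up).
    const muted: MuteList = new Set(["did:plc:abc123", "did:plc:def456"]);
    ```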

    Policy and Community Guideline Updates

    Bluesky has recently revised its policy guidelines after collecting public feedback from over 14,000 community members. The new version, effective Oct 15, 2025, is organized around principles like Safety First, Respect Others, Be Authentic, and Follow the Rules, which help clarify what content is moderated, removed, or penalized.

    Stronger enforcement is promised, especially for harassment, toxic content, and other harmful behavior.

    Verification & Identity: Impersonation Prevention

    The blue-check verification mark for authentic and notable accounts, plus Trusted Verifier status for organizations, helps reduce impersonation attacks.

    Bluesky also works to prevent abuse through misuse of lists: it scans user lists and public lists for abusive names or descriptions, and if a list is used to harass people via list membership, that is addressed.

    Strengthened Moderation Staff & Resources

    Bluesky increased its moderation staff from 25 to 100 to better keep up with user growth and the resulting increase in reports and malicious content.

    Moderation is focused on high-severity policy areas (child safety, sexual content involving minors, harassment) to ensure prompt detection and takedown. (GIGAZINE)

    Bluesky’s broader moderation efforts include:

    • Developing advanced algorithms for detecting harmful content.
    • Training moderators to accurately and consistently enforce policies.
    • Implementing user-friendly reporting mechanisms.

    Policy Enforcement Strengthening

    Bluesky’s commitment extends to strengthening the enforcement of its policies. This includes:

    • Swiftly addressing reported violations.
    • Applying appropriate penalties for policy breaches, such as account suspension.
    • Providing clear communication to users about moderation decisions.

    Recent Moves by Bluesky on Moderation & Enforcement

    Bluesky has stated it will more quickly escalate enforcement actions toward account restrictions. Previously it would give multiple warnings; now fewer warnings may be given before accounts that violate the rules are deactivated or restricted.
    It is also making product changes that clarify when content is likely to violate guidelines, giving users better warning beforehand.

    Updated Community Guidelines & Appeals Process

    In August 2025, Bluesky rolled out a major revamp of its community and safety policies. The changes are meant to improve clarity around the rules, user safety, and how appeals are handled.
    The guidelines are organized around four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. These help structure decisions about what content must be labeled or removed, and when accounts may be banned.

    Scaling Moderation Capacity

    In 2024, Bluesky saw a huge jump in moderation reports: about 6.48 million, versus 358,000 in 2023, a 17× increase.

    To cope, Bluesky has expanded the moderation team to around 100 moderators and increased hiring.
    Automation is being used more extensively for high-certainty reports (spam, bots, and the like) to reduce processing times; human moderators remain involved for review and for dealing with false positives.

    Partnerships & Tools for Safety

    Bluesky partnered with the Internet Watch Foundation (IWF) to help tackle child sexual abuse material (CSAM). This adds trusted external tools and frameworks.
    It is also developing new anti-harassment features, e.g., detecting users who create multiple accounts for harassment, automatically hiding malicious replies, and improving spam and fake-account detection.

    Verification & Trust Indicators

    Bluesky introduced blue checks for notable and authentic accounts and added a Trusted Verifier status that lets certain organizations authenticate others. This helps with impersonation problems.

    Challenges & Criticisms

    Verifying fundraising or cause-based accounts (e.g., in Gaza) has been especially hard, with repeated suspensions or accounts being flagged as spam under automated rules.

    Users have raised concerns that automated moderation sometimes leads to false positives, unfair deactivations, or content being wrongly flagged.
    Some content creators and users worry that enforcement may have chilling effects on expression, particularly for marginalized voices. Bluesky has said it heard these concerns during feedback on the guideline drafts.

  • Gen Z Tactics: Phia’s $8M Fundraising Success

    How Gen Z Raised $8M for Phia Using Modern Methods

    Phoebe Gates and Sophia Kianni harnessed the power of Gen Z engagement strategies to raise an impressive $8 million for Phia. Their approach provides insights into how to effectively mobilize younger generations for philanthropic causes. Understanding these tactics can be invaluable for any organization looking to connect with and engage Gen Z.

    Understanding Gen Z Engagement

    Gen Z’s world revolves around digital platforms and social media. To successfully engage this demographic, campaigns must be authentic, transparent, and easily shareable. Phoebe Gates and Sophia Kianni’s success hinged on their deep understanding of these principles.

    • Authenticity: Gen Z values genuine connections and can quickly identify inauthentic messaging.
    • Transparency: Providing clear and honest information about the cause builds trust.
    • Shareability: Content must be easily shareable across various social media platforms.

    Key Fundraising Strategies

    Several key strategies contributed to the $8 million fundraising success:

    • Social Media Campaigns: Leveraging platforms like TikTok, Instagram, and Twitter to spread awareness and solicit donations.
    • Influencer Partnerships: Collaborating with relevant influencers who resonate with Gen Z audiences.
    • Peer-to-Peer Fundraising: Encouraging individuals to create their own fundraising pages and solicit donations from their networks.
    • Gamification: Introducing game-like elements to make fundraising more engaging and fun.

    Examples of Successful Campaigns

    To illustrate these strategies, consider examples of campaigns that resonated with Gen Z. For instance, viral challenges on TikTok or engaging Instagram stories highlighting the impact of donations can drive significant contributions. Platforms like Classy offer tools to facilitate peer-to-peer fundraising, making it easy for individuals to get involved. Utilizing fundraising thermometers to visualize progress can also motivate further contributions.

    The Role of Technology

    Technology plays a crucial role in modern fundraising. Platforms like Mightycause and others provide nonprofits with tools to manage campaigns, track donations, and communicate with donors effectively. By leveraging these technologies, organizations can streamline their fundraising efforts and maximize their impact.

  • US & China Reach TikTok Deal Framework

    US and China Agree on TikTok Framework Deal

    The United States and China have reportedly reached a preliminary agreement on a framework for TikTok. Specifically, this deal aims to address the security concerns surrounding the popular video-sharing app, potentially paving the way for its continued operation in the U.S.

    Details of the Framework

    While specific details remain under wraps, the framework suggests a pathway for TikTok to operate while mitigating data security risks. In particular, this could involve measures to ensure U.S. user data remains protected and isn’t accessible to the Chinese government.

    Potential Implications

    • The U.S. and China have agreed in principle on a framework deal that would transfer control of TikTok’s U.S. operations or create a new U.S. entity under U.S. ownership.
    • Oracle, Silver Lake, and Andreessen Horowitz are reportedly among the U.S. investors in the new ownership group. Oracle is expected to continue playing a role in managing U.S. user data.
    • A deadline for ByteDance to divest or face a potential ban has been extended until mid-December to allow time to finalize the deal. (Politico)
    • The framework also touches on national security concerns, especially around who controls U.S. user data, where it is stored, and who controls TikTok’s recommendation algorithm. Some Chinese aspects (the algorithm, licensing, possibly some influence) are being preserved or licensed under certain terms.

    What This Might Mean Moving Forward

    Here are possible implications and consequences of this agreement:

    1. TikTok likely to stay in the U.S. for now
      The risk of an outright ban seems mitigated if the divestment and ownership changes happen as planned. This gives relief to users, content creators, advertisers, and other parties that depend on TikTok.
    2. National security concerns addressed but not completely resolved
      While ownership, data storage, and board control seem to be moving toward U.S. entities, control over the algorithm remains one of the key sticking points. How much influence ByteDance might retain, and whether algorithm licensing and training remain under Chinese control, is still under negotiation.
    3. Precedent for U.S.–China tech trade deals
      This deal may set a template for how cross-border ownership issues, data privacy, and national security concerns are handled in future disputes. If successful, it could shape policy frameworks for other Chinese tech or social media companies operating in the U.S.
    4. Regulatory and legislative scrutiny ahead
      Even with a deal, Congress will likely scrutinize the terms closely, especially around whether legislative requirements (e.g., how much Chinese ownership or influence remains) are met. There could be pushback or efforts to tighten the rules.
    5. Timeline and operational transition challenges
      The mid-December deadline leaves several weeks for negotiations, legal structuring, and possibly changes to how data is handled, how ownership is transferred, and how new boards are set up. The transition carries logistical, legal, and technical risks, e.g., separating algorithm components, migrating user data, and setting up oversight.
    6. Public trust and user experience
      How transparent the deal is (e.g., about data policy, algorithm changes, and ownership) will influence public trust. If users perceive that data is still accessible or that foreign influence remains, there may be criticism. On the flip side, if the company can convincingly show U.S. control and strong data protections, that could restore confidence.

    What’s Next?

    Congress will likely scrutinize whether the deal meets the divestiture requirements and sufficiently separates Chinese ownership control from U.S. operations.

    Ownership Structure & Control

    TikTok’s U.S. operations are to be transferred to a U.S.-controlled entity. U.S. investors such as Oracle, Silver Lake, and a16z are expected to hold about 80%, while ByteDance would retain a minority stake of 19.9%, just under the 20% threshold set by the 2024 divestiture law.

    The U.S. entity will have an American-dominated board, possibly including one government-appointed member.

    Deadline Extensions

    The deadline for enforcing the divestiture law has been extended multiple times; most recently it was pushed to December 16, 2025, to give both sides more time to finalize the deal.

    Algorithm & Data Handling

    One major sticking point is how much control ByteDance or China will retain over the algorithm. Under current proposals there may be a licensing arrangement: the U.S. side wants the algorithm and recommendation system to be independent, while China appears to want to preserve some Chinese involvement via licensing or other IP arrangements.

    U.S. user data is expected to be stored in the U.S. and managed by U.S. entities; Oracle is expected to play a central role.

    Security & Regulatory Oversight

    To satisfy U.S. national security concerns, the deal is being structured with safeguards. These include oversight mechanisms for data access, separation of operations, and limitations on foreign influence.

    There is talk of one governing board member being appointed by the U.S. government, or of oversight from U.S. authorities, to ensure standards are met.

    Legal / Legislative Compliance

    The deal must align with the 2024 law passed by Congress (PAFACA, the Protecting Americans from Foreign Adversary Controlled Applications Act), which requires that foreign adversary-controlled applications like TikTok divest or otherwise resolve U.S. national security risks.

  • Mastodon Enhances Posts with Anti-Harassment Tools

    Mastodon Rolls Out Quote Posts with Enhanced Protections

    Mastodon recently introduced a quote post feature, designed with built-in protections to minimize harassment, specifically preventing what is known as ‘dunking’. This update aims to foster a more positive and constructive environment on the decentralized social network.

    Quote Posts on Mastodon: What’s New?

    The new quote post feature allows users to share and comment on existing posts. Unlike similar features on other platforms, Mastodon’s implementation includes measures to discourage abusive behavior. This reflects Mastodon’s commitment to user safety and community well-being.

    Protections Against ‘Dunking’

    Mastodon has implemented specific safeguards to prevent ‘dunking,’ which refers to the act of publicly ridiculing or attacking someone’s post. These measures include:

    • Visibility Controls: Users can control who sees their quote posts, limiting potential exposure to unwanted interactions.
    • Moderation Tools: Enhanced moderation tools help community moderators quickly address and resolve instances of harassment.
    • Reporting Mechanisms: Improved reporting mechanisms enable users to easily flag abusive content and behavior.
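
    One way visibility controls like those above could be enforced is a per-author quote policy checked before a quote post is created. The policy names below are assumptions modeled on Mastodon's announced controls, not its actual code:

    ```typescript
    type QuotePolicy = "everyone" | "followers" | "nobody";

    interface Account {
      id: string;
      quotePolicy: QuotePolicy;
      followerIds: Set<string>;
    }

    // Returns whether requesterId may quote one of author's posts.
    function canQuote(author: Account, requesterId: string): boolean {
      if (author.quotePolicy === "everyone") return true;
      if (author.quotePolicy === "followers") {
        return author.followerIds.has(requesterId);
      }
      return false; // "nobody"
    }
    ```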

    The Impact on the Mastodon Community

    The introduction of quote posts with these protections marks a significant step in Mastodon’s ongoing efforts to create a safer and more inclusive social network. By prioritizing user safety and implementing thoughtful features, Mastodon continues to distinguish itself from other social media platforms.

  • Stop Autoplay: Taming Your Social Media Feeds

    Tired of Autoplay? Control Your Social Feeds

    Annoyed by videos that automatically start playing as you scroll through your social media feeds? You’re not alone! Autoplay can be disruptive, consume data, and generally detract from your browsing experience. Fortunately, most platforms offer options to disable or customize this feature. Let’s explore how to take control of your feeds.

    Turning Off Autoplay on Facebook

    Facebook’s autoplay settings are relatively easy to find and adjust. Here’s how:

    • On Desktop: Navigate to Settings & Privacy > Settings > Videos.
    • On Mobile: Tap the menu icon (three horizontal lines), then scroll down to Settings & Privacy > Settings > Media.

    Once you’re in the video settings, you can choose from the following options:

    • Auto-Play: Select ‘Off’ to completely disable autoplay.
    • On Mobile Data: Choose to only allow autoplay when you’re connected to Wi-Fi. This can help save your mobile data.

    Disabling Autoplay on Twitter/X

    Twitter, now known as X, also allows you to manage video autoplay. Here’s how to change it:

    • On Desktop: Click ‘More’ in the left-hand menu, then Settings and privacy > Accessibility, display, and languages > Data usage > Autoplay.
    • On Mobile: Tap your profile icon, then Settings and support > Settings and privacy > Data usage > Autoplay.

    You can then select ‘Never’ to disable autoplay completely or choose ‘Wi-Fi only’.

    Managing Autoplay on Instagram

    Instagram’s autoplay settings are linked to your data usage. Adjust these settings to control autoplay:

    • On Mobile: Go to your profile, tap the menu icon (three horizontal lines), then Settings > Account > Cellular Data Use.

    Enable ‘Use Less Data’. This might prevent videos from autoplaying when you’re on cellular data. Keep in mind that Instagram doesn’t offer a direct ‘turn off autoplay’ option, but reducing data usage can help mitigate it.

    YouTube Autoplay Controls

    YouTube has a slightly different approach. It lets you control the autoplay of the *next* video, rather than the video in your feed. Here’s how:

    • On Desktop & Mobile: On the watch page, you’ll see an autoplay toggle switch. Simply turn it off to prevent the next video from automatically playing.

    This setting is account-specific, so you’ll need to adjust it on each device where you use YouTube.
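
    If a platform offers no setting at all, a browser userscript can serve as a blunt, unofficial fallback. This sketch pauses any video a web page starts playing; it uses only standard DOM APIs, is not tied to any particular site, and would also pause videos you started yourself unless you add an allow-list:

    ```typescript
    // Pause every currently playing <video> under the given root.
    function pauseAutoplayingVideos(root: ParentNode): void {
      root.querySelectorAll("video").forEach((video) => {
        if (!video.paused) video.pause();
      });
    }

    // Infinite feeds inject videos as you scroll, so re-run on DOM changes.
    const observer = new MutationObserver(() => pauseAutoplayingVideos(document));
    observer.observe(document.body, { childList: true, subtree: true });
    pauseAutoplayingVideos(document);
    ```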

  • Meta Enhances Community Notes with Correction Alerts

    Meta Boosts Community Notes with New Alert Features

    Meta is rolling out fresh updates to its Community Notes feature, designed to enhance the accuracy and transparency of fact-checking on its platforms. These updates include alerts for corrected posts, aiming to keep users informed when notes significantly alter the context of a post.

    Alerts for Corrected Posts

    One of the key additions is the introduction of alerts that notify users when a Community Note has substantially changed the context of a post they’ve seen. This ensures people are aware of corrections and updated information, promoting a more informed online environment.

    How It Works:

    • When a Community Note receives a high helpfulness rating and significantly changes the interpretation of a post, an alert will appear.
    • This alert directly informs users who have previously viewed the original post about the updated context.
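
    A minimal sketch of such an alert trigger; the threshold, field names, and the fan-out step are assumptions, not Meta's implementation:

    ```typescript
    interface CommunityNote {
      postId: string;
      helpfulnessRating: number; // aggregate contributor rating in [0, 1]
      changesContext: boolean;   // whether the note materially alters the post's meaning
    }

    const HELPFULNESS_THRESHOLD = 0.85; // assumed cutoff for "high" helpfulness

    function shouldAlertPriorViewers(note: CommunityNote): boolean {
      return note.helpfulnessRating >= HELPFULNESS_THRESHOLD && note.changesContext;
    }

    // A real system would then fan the alert out to every user whose view
    // history contains note.postId; that delivery step is left abstract here.
    ```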

    Enhanced Transparency and Accuracy

    Meta’s commitment to combating misinformation is evident in these improvements. By proactively notifying users of corrections, Meta aims to:

    • Reduce the spread of inaccurate information.
    • Increase trust in the platform’s content.
    • Empower users with accurate context.

    Future Improvements

    Meta plans to continue iterating on Community Notes, exploring new ways to make fact-checking more effective and user-friendly. This includes refining the alert system and expanding the reach of Community Notes to more users and content types. It also shows the company’s dedication to promoting accurate and reliable information across its platforms.

  • Bluesky Navigates Age Verification Laws: Updates

    Bluesky Adapts to Age Verification Laws

    Bluesky, the decentralized social network, is making strategic adjustments to comply with evolving age verification regulations. After exiting Mississippi, the platform will now adhere to the age-verification laws in both South Dakota and Wyoming. This move reflects Bluesky’s commitment to navigating the complex regulatory landscape while ensuring user safety and compliance.

    Compliance in South Dakota and Wyoming

    Bluesky will implement measures to verify the age of its users in South Dakota and Wyoming. These laws typically require platforms to confirm that users are of a certain age before allowing them access to specific content or features. By complying, Bluesky aims to protect younger users from potentially harmful material and align with local legal requirements.

    Exit from Mississippi

    Bluesky’s decision to exit Mississippi preceded its compliance moves in South Dakota and Wyoming. The reasons for leaving Mississippi haven’t been specified, but this strategic shift highlights the challenges social networks face in adapting to varying state laws.

    Implications for Users

    Users in South Dakota and Wyoming can expect changes in their platform experience as Bluesky implements age verification processes. These may include:

    • Requests to verify age through acceptable identification methods.
    • Restrictions on access to content for users who cannot verify their age.
    • Adjustments to platform features to ensure compliance with state laws.
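
    In practice, such gating often reduces to a per-state check before restricted content is served. A hedged sketch, with the state codes and verification flag as illustrative assumptions:

    ```typescript
    interface UserSession {
      stateCode: string;    // e.g. "SD" or "WY"
      ageVerified: boolean; // set after an accepted verification method
    }

    // States whose laws require age verification (illustrative set).
    const AGE_VERIFICATION_STATES = new Set(["SD", "WY"]);

    function canAccessRestrictedContent(session: UserSession): boolean {
      if (!AGE_VERIFICATION_STATES.has(session.stateCode)) return true;
      return session.ageVerified;
    }
    ```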

    Navigating the Regulatory Landscape

    Bluesky’s actions underscore the increasing importance of regulatory compliance for social media platforms. As governments worldwide introduce new laws governing online content and user data, companies must proactively adapt their policies and practices to remain compliant and protect their users. This also highlights the need for strong cyber and network security practices.

  • Reddit Unveils Tools for Publishers: Track & Share Stories

    Reddit Empowers Publishers with New Tracking and Sharing Tools

    Reddit is rolling out new tools designed to help publishers track and share their stories more effectively on the platform. These features aim to provide publishers with greater insights into how their content performs and make it easier to engage with the Reddit community.

    Enhanced Story Tracking

    The core of the update is focused on giving publishers better data. With these new tools, publishers can now monitor:

    • The reach of their articles.
    • The engagement levels within relevant subreddits.
    • Overall performance metrics to fine-tune their content strategy.

    Streamlined Sharing Options

    Beyond tracking, Reddit is also simplifying the process for publishers to share their content. The updated tools include:

    • Direct sharing options to specific subreddits.
    • Customizable post previews to attract more attention.
    • Integrated analytics to measure the impact of each shared story.

    Why This Matters

    These enhancements are significant because they provide publishers with actionable data and streamlined workflows. By understanding how their content resonates with Reddit users, publishers can refine their approach to better connect with audiences. This benefits both publishers and the Reddit community by fostering more relevant and engaging discussions.

  • Sam Altman on Social Media Bots and Authenticity

    Are Bots Making Social Media Feel Fake? Sam Altman Thinks So

    Sam Altman, CEO of OpenAI, recently shared his perspective on a growing concern: the impact of bots on the authenticity of social media. He suggests that the increasing presence of automated accounts contributes to a feeling of artificiality in online interactions. This observation sparks important questions about the future of social platforms and the measures we might need to preserve genuine connections.

    The Rise of Bots

    The proliferation of bots on social media platforms isn’t news. These automated accounts serve various purposes, from marketing and customer service to spreading information (or misinformation). While some bots provide valuable services, others engage in activities that degrade the user experience, such as:

    • Spreading spam and phishing links
    • Amplifying propaganda and disinformation
    • Artificially inflating follower counts
    • Manipulating trending topics

    Altman’s Perspective

    Altman’s comments highlight a deeper concern about the erosion of trust and authenticity online. When a significant portion of interactions are driven by bots, it becomes difficult to discern genuine human voices from programmed responses. This can lead to a sense of disconnect and cynicism among users.

    The Impact on Social Media Platforms

    Social media platforms are grappling with the challenge of combating bots. Identifying and removing these accounts is an ongoing battle, requiring sophisticated algorithms and constant vigilance. Some strategies platforms employ include:

    • Improving bot detection algorithms
    • Implementing stricter account verification processes
    • Enforcing clear policies against bot activity
    • Providing users with tools to report suspicious accounts
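
    To make the first bullet concrete, here is a toy bot-scoring heuristic; the features, weights, and threshold are invented for illustration and do not reflect any platform's actual model:

    ```typescript
    interface AccountSignals {
      postsPerHour: number;
      followerFollowingRatio: number;
      accountAgeDays: number;
      duplicateTextRate: number; // share of posts repeating identical text, in [0, 1]
    }

    function botScore(s: AccountSignals): number {
      let score = 0;
      if (s.postsPerHour > 20) score += 0.35;            // inhuman posting cadence
      if (s.followerFollowingRatio < 0.01) score += 0.2; // mass-following behavior
      if (s.accountAgeDays < 7) score += 0.15;           // fresh throwaway account
      score += 0.3 * s.duplicateTextRate;                // copy-paste spam
      return Math.min(1, score);
    }

    // Accounts above this threshold would be queued for verification
    // challenges or human review rather than removed outright.
    const REVIEW_THRESHOLD = 0.6;
    ```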

    The Future of Authenticity Online

    Addressing the issue of bots is crucial for maintaining the integrity of social media. As AI technology continues to advance, it’s likely that bots will become even more sophisticated and difficult to detect. Moving forward, a multi-faceted approach involving technological solutions, policy changes, and user education will be necessary to preserve authenticity and foster genuine online connections. It will require a combined effort from platforms, users, and developers to ensure the future of social media remains human-centric.

  • Snap Reorganizes Teams as Ad Revenue Growth Slows

    Snap Reorganizes Teams as Ad Revenue Growth Slows

    Snap is undergoing a strategic shift, breaking its teams into what it calls ‘startup squads’ as the company faces headwinds in ad revenue growth. This reorganization aims to foster innovation and agility within the social media company.

    Why the Restructuring?

    The primary driver behind this move is the need to reignite growth in Snap’s advertising revenue. Recent financial reports highlight a slowdown, prompting the company to explore new operational models. By creating smaller, more focused teams, Snap hopes to unlock new revenue streams and better compete in the dynamic social media landscape.

    What are ‘Startup Squads’?

    These ‘startup squads’ are essentially small, cross-functional teams that operate with a high degree of autonomy. Each squad focuses on a specific product or feature, with the goal of rapidly iterating and launching new innovations. This approach mirrors the lean startup methodology, emphasizing speed, experimentation, and customer feedback.

    • Agility: Smaller teams can make decisions faster and adapt quickly to changing market conditions.
    • Focus: Each squad has a clear mission and a dedicated set of resources.
    • Innovation: Empowering teams to experiment and take risks can lead to breakthrough ideas.

    Implications for Snap’s Future

    This reorganization represents a significant shift in Snap’s approach to product development and innovation. By embracing a more decentralized and agile model, Snap aims to:

    • Accelerate Product Development: Get new features and products to market faster.
    • Improve User Engagement: Create more compelling and engaging experiences for Snapchat users.
    • Drive Revenue Growth: Unlock new advertising opportunities and diversify revenue streams.