
SES Chicago 2009 – PageRank for People Presentation

9 December 2009

Here’s a copy of the deck I presented this week at SES Chicago on ‘PageRank for People and Distributed Reputation Systems’. Feel free to comment or tweet any questions. A link to the full version, including my speaking points, is at the bottom of the post.

Download full PPT including speaker notes



12 Responses to ‘SES Chicago 2009 – PageRank for People Presentation’


  1. bored said,

    on December 15th, 2009 at 9:33 am

    It doesn’t matter. Google should not be relied upon for everything we do.
    New, smarter applications will eventually evolve to handle the large volume of new data and sort it into the information we want, when we need it.
    What is frustrating now is the volume of stories that aren’t identical but clearly share a source, yet are written very differently. Dealing with those stories and blogs is tough even for humans, as they may contain subjective arguments that also need to be added to the discussion of an event.

    This comment was originally posted on ReadWriteWeb

  2. John said,

    on December 15th, 2009 at 9:44 am

    A return to the subscription business model seems inevitable at this point. People make fun of Rupert Murdoch – he is an easy target – but he’s on to something. The amount of garbage and disinformation makes for a less interesting internet.

    This comment was originally posted on ReadWriteWeb

  3. Don Marti said,

    on December 15th, 2009 at 10:13 am

    Remember PageRank? Lately, it seems like Google has been putting too much weight on which results can attract clicks from regular users, and less weight on which pages have incoming links. The reason most of us switched to Google in the first place was the algorithm that mostly relied on people who are capable of making a link, not on tricks in the page content. (The rise of blogging fed lots of links to Google quickly.)

    Ditch the JavaScript click tracking on the Google SERPs and Demand Media would go away.

    This comment was originally posted on ReadWriteWeb


  4. on December 15th, 2009 at 11:18 am

    [...] Track reputation against authors rather than URLs – a ‘PageRank for People’; Marshall Clark. [...]

  5. simon said,

    on December 15th, 2009 at 2:52 pm

    Asking the crowd to build quality needs strict rules (a success example: Wikipedia)… and strong antibodies against each little conflict of interest generated by content… and Knol is still alive…

    This comment was originally posted on ReadWriteWeb

  6. MSLOVER said,

    on December 15th, 2009 at 2:52 pm

    Goog ain’t gonna do zip… these folks make people click the almighty AdWord. If Demand Media is raking in close to $250mm a year, imagine the Big G revenue from them. Do no evil, but let others do it, we don’t mind.

    This comment was originally posted on ReadWriteWeb

  7. J. Lenley said,

    on December 15th, 2009 at 4:26 pm

    Google works hand in hand with Demand Media on YouTube, for instance: that’s basically Google co-owning a content-service business via advertising.

    http://www.lenley.com

    This comment was originally posted on ReadWriteWeb

  8. Ed Borasky said,

    on December 16th, 2009 at 12:22 am

    I took a look at AOL’s Seed.com today. From what I saw, I don’t think it’s fair to characterize it as a "content farm" or any of the other pejorative terms that have been applied over the past few days. Now maybe it will turn out to be a source of irrelevant search results and low-quality, high-quantity content, but for now, I’m giving them the benefit of the doubt.

    I submitted two articles. Both were on topics that Seed.com suggested, not something I picked. And both were about Linux from the perspective of users. At this point I have no idea whether they’ll publish them or not, but I did not get the impression that there was anything spammy about it. I’ll keep you all posted on how it works out.

    This comment was originally posted on ReadWriteWeb


  9. on December 16th, 2009 at 8:00 am

    The big problem here is defining "good" quality. What is good for you could be bad for me.

    I think the future lies in incorporating algorithms that use social networks and a "lens of friends" in the search results. I trust my friends more than someone I don’t know. I trust certain people more in certain areas, and other people in others. What it boils down to in the end is personalized search and a personal view of quality.

    To some extent we are already there, and the direction is clear imo. As Steve Rubel puts it:

    http://www.steverubel.com/three-observations-from-le-web

    "nowadays no two people see the same Internet"

    JMHOFWIW.

    This comment was originally posted on ReadWriteWeb


  10. on December 16th, 2009 at 2:46 pm

    A great follow-up post to the other one.
    But there’s a bit of a dilemma for Google: if they make search more efficient, won’t that lower their revenues?
    On the other hand, we could foresee a divergence of search methods: page-ranked, attention/social-ranked, raw real-time, quality-curated, etc…

    This comment was originally posted on ReadWriteWeb

  11. drdavehale said,

    on December 17th, 2009 at 8:25 am

    Great article. This must be a hot topic today, as I have read similar postings elsewhere. Yes, I, like you, know that not all of Demand Media’s articles are of poor quality. How do I know? I write for them. If some articles are poor, why would a client looking to purchase an article buy a poor one, only to put it on their website?

    I post the articles I write to many directories and freelance project clients. EzineArticles.com, for sure, keeps me on my toes more than some of the paid services.

    Dr. Dave Hale
    The Internet Marketing Professor
    http://drdavehaleonline.com

    This comment was originally posted on ReadWriteWeb


  12. on December 17th, 2009 at 9:55 pm

    @William is proposing some good thoughts on the divergence of search methods.

    Why doesn’t Google create 100 algorithms working in many different ways – social, peoplerank, socialrank, retweet rank, etc.?

    Why not let the user choose the algorithm they want?

    Why doesn’t Google open source their [pagerank] algorithm?

    Let the world contribute to improving the quality of search results. Create thousands of algorithms… Make it transparent. If it’s transparent, there is no such thing as "gaming" or "baiting". The quality of the algorithms will rise as the community invests in ensuring high-quality results.

    Content farms just make this problem happen sooner: choking on low-quality content was going to happen at some point in the future anyway. These content farms are just making it happen now.

    We need new organisational structures to exist to handle a world with so much content.

    This comment was originally posted on ReadWriteWeb
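
A footnote on the thread above. Don Marti (comment 3) contrasts link-based ranking with click-based ranking. For readers who haven’t seen it spelled out, here is a minimal sketch of PageRank-style power iteration over a toy link graph – the graph, damping factor, and iteration count are invented for illustration, not Google’s actual values:

    # Minimal power-iteration PageRank over a toy link graph.
    # The graph, damping factor, and iteration count are illustrative only.

    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            # Every page gets a base share; the rest flows along links.
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if outlinks:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
                else:
                    # Dangling page: spread its rank evenly over all pages.
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
            rank = new_rank
        return rank

    graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    print(pagerank(graph))  # "c" ranks highest: it has the most incoming links

The point of Don’s complaint is visible here: nothing in this calculation looks at clicks, only at who chose to link to whom.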
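The "lens of friends" idea in comment 9 can be pictured as re-ranking the same candidate results through a personal trust network. A toy sketch, with invented trust weights and endorsement data, showing why no two people would see the same results:

    # Toy "lens of friends" re-ranking: boost results endorsed by people
    # the searcher trusts. All weights and data below are invented.

    trust = {"alice": 0.9, "bob": 0.6, "carol": 0.3}  # my personal trust network

    endorsements = {  # who has linked to or shared each result
        "url1": ["alice", "bob"],
        "url2": ["carol"],
        "url3": [],
    }

    base_scores = {"url1": 0.5, "url2": 0.8, "url3": 0.7}  # generic relevance

    def personalized_score(url, social_weight=0.5):
        # Add a boost proportional to how much I trust each endorser.
        social = sum(trust.get(person, 0.0) for person in endorsements[url])
        return base_scores[url] + social_weight * social

    ranked = sorted(base_scores, key=personalized_score, reverse=True)
    print(ranked)  # ['url1', 'url2', 'url3'] – two trusted friends lift url1

Swap in a different trust network and the ordering changes, which is exactly the Rubel observation quoted above.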
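And comment 12’s "let the user choose the algorithm" is, mechanically, just pluggable scoring functions over one candidate result set. A hypothetical sketch (the signal names and data are made up):

    # Hypothetical pluggable ranking: the searcher picks which scoring
    # function orders the same candidate result set. Signals are made up.

    def link_rank(result):
        return result["inlinks"]        # classic link-graph signal

    def social_rank(result):
        return result["retweets"]       # endorsement / retweet signal

    def realtime_rank(result):
        return -result["age_hours"]     # freshest first

    ALGORITHMS = {"pagerank": link_rank,
                  "socialrank": social_rank,
                  "realtime": realtime_rank}

    def search(results, algorithm="pagerank"):
        return sorted(results, key=ALGORITHMS[algorithm], reverse=True)

    results = [
        {"url": "url1", "inlinks": 900, "retweets": 12, "age_hours": 48},
        {"url": "url2", "inlinks": 40, "retweets": 300, "age_hours": 2},
    ]
    print([r["url"] for r in search(results, "socialrank")])  # url2 first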

