it's ok i can fix it ([personal profile] lim) wrote in [community profile] newslettering, 2010-04-18 08:54 pm

Yahoo Pipes

Has anyone else played with Yahoo Pipes to gather content? I've made a rudimentary pipe to try out: True Blood Pipe.

My thinking was that it could easily aggregate content from youtube, wordpress, livejournal, ao3, etc. It does seem easier to manipulate than manually syndicating sites to a watcher journal flist, and you can filter content for relevance/keywords too.
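
For comparison, here's roughly what that aggregation step looks like done by hand in PHP instead of in the Pipes editor -- just an untested sketch, with made-up example feed URLs and a hard-coded keyword:

    <?php
    // Merge several RSS feeds and keep only items whose title or
    // description mentions a keyword -- the same job a Fetch Feed +
    // Filter pair does inside the Pipes editor. Feed URLs are examples.
    $feeds = array(
        'http://community.livejournal.com/example/data/rss',
        'http://example.wordpress.com/feed/',
    );
    $keyword = 'true blood';
    $items = array();

    foreach ($feeds as $url) {
        $rss = simplexml_load_file($url);
        if ($rss === false) {
            continue; // skip feeds that fail to load
        }
        foreach ($rss->channel->item as $item) {
            $text = strtolower($item->title . ' ' . $item->description);
            if (strpos($text, $keyword) !== false) {
                $items[] = array(
                    'title' => (string) $item->title,
                    'link'  => (string) $item->link,
                    'date'  => strtotime((string) $item->pubDate),
                );
            }
        }
    }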

There is a PHP output option so conceivably it could be autoformatted, bypassing delicious entirely? I don't know! Anyone got any insight?
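
What I'm imagining is something like this -- totally untested, and assuming _render=php returns a PHP-serialized version of the same structure as the JSON output (items nested under value); PIPE_ID stands in for the pipe's id:

    <?php
    // Fetch a pipe's output in PHP-serialized form, unserialize it,
    // and dump the items as an HTML list. PIPE_ID is a placeholder.
    $url = 'http://pipes.yahoo.com/pipes/pipe.run?_id=PIPE_ID&_render=php';
    $data = unserialize(file_get_contents($url));

    echo "<ul>\n";
    foreach ($data['value']['items'] as $item) {
        printf("<li><a href=\"%s\">%s</a></li>\n",
               htmlspecialchars($item['link']),
               htmlspecialchars($item['title']));
    }
    echo "</ul>\n";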

[personal profile] writerlibrarian 2010-04-18 08:20 pm (UTC)
I've been using Yahoo Pipes for at least 3 years now at work to filter book review feeds from diverse sources.

It's quite easy to tailor to your needs: you can take out duplicates, sort the results your way, and then subscribe to your pipe's feed through your favourite feed reader.
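
(If you ever want the same dedupe-and-sort outside of Pipes, it's only a few lines of PHP -- a rough sketch, assuming $items is an array where each item has 'link' and 'date' keys, as in the aggregation sketch above:)

    <?php
    // Drop duplicate items by link (what the Unique operator does)
    // and sort newest-first by date (what the Sort operator does).
    $seen = array();
    $unique = array();
    foreach ($items as $item) {
        if (!isset($seen[$item['link']])) {
            $seen[$item['link']] = true;
            $unique[] = $item;
        }
    }

    function by_date_desc($a, $b) {
        return $b['date'] - $a['date'];
    }
    usort($unique, 'by_date_desc');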

Mine is Netvibes, which has public and private options, so I can also share the results of the Yahoo pipe through my public Netvibes page for other librarians to use.

I used these to figure out how to tailor the data:

http://www.mrspeaker.net/2007/02/10/yahoo-pipes/?year=2007&monthnum=02&day=10&page

Some other tutorials: http://pipes.yahoo.com/pipes/docs?doc=tutorials

My Yahoo pipe for book reviews, created in April 2007 and last updated in 2008: http://pipes.yahoo.com/pipes/pipe.info?_id=1sBLERri2xG3oWfCjknRlg

Hope it helps.

[personal profile] murklins 2010-05-10 03:08 pm (UTC)
I finally got a chance to play with pipes in the context of your question and arghhhhh, I kept trying to make it do what seemed like totally reasonable things and it kept rejecting me at every turn. Like, why can I not give it the OPML file of my friends list? If I had a newsletter with shared administration, I wouldn't want everyone messing directly with the pipe, but I also wouldn't want them to have to email me every time they came across a new journal to track for content. Much easier to simply friend and have the new feed automatically picked up by the pipe via the OPML file. BUT NO. Trying to do that caused error after error.
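
(The maddening part is that parsing the OPML yourself is trivial -- here's a sketch, assuming a standard OPML subscription list where each feed is an outline element with an xmlUrl attribute; the filename is made up:)

    <?php
    // Pull every feed URL out of an OPML subscription list --
    // the step Pipes refuses to do for me.
    $opml = simplexml_load_file('friends.opml');
    $urls = array();
    foreach ($opml->xpath('//outline[@xmlUrl]') as $outline) {
        $urls[] = (string) $outline['xmlUrl'];
    }
    // $urls could then feed the pipe, or a fetch-and-merge loop.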

Another thing I thought might be awesome: pull in only recent, FRESH links from Delicious. Most of the links in a tag's feed are old -- people bookmark the popular things over and over again. So I wouldn't want to muddy up my newsletter with old stuff from Delicious; I want to have my pipe filter those links out. TALK ABOUT IMPOSSIBLE. This is more Delicious' fault than Pipes', though -- probably I should go tell Del that they seriously need to include first-posted date info in their tag feeds (and also everywhere on the site).
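
(To make the problem concrete: the obvious filter is easy to write -- sketch below, assuming $items as in the earlier sketches -- but since the feed's date is just when that user bookmarked the link, it can't distinguish an old link re-bookmarked today from a genuinely new one:)

    <?php
    // Keep only items dated within the last 24 hours. For a
    // Delicious tag feed, 'date' is the bookmark date, not the
    // date the link first appeared -- which is the whole problem.
    $cutoff = time() - 24 * 60 * 60;
    $fresh = array();
    foreach ($items as $item) {
        if ($item['date'] >= $cutoff) {
            $fresh[] = $item;
        }
    }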

But enough of my frustrations, let's talk about what you actually wanted to know, which is how to automatically run your pipe and convert the output to a journal post. Pretty sure this is doable! (But I'm afraid to test it, in case once again I am proven totally wrong.) First, you need to set some date limitations on your pipe, so that you can pass in a date (or date range) and it will filter everything down by that criterion. For a daily newsletter, you want your pipe to spit out only 24 hours' worth of links. Then you would write a little PHP script that runs your pipe, pulls in the results, formats them as HTML and posts them up at the journal of your choice. Finally, you set that script up to run automatically on a schedule and voila!
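
(A sketch of what I mean -- untested! It assumes the pipe exposes a hypothetical days user input for the date window, that the JSON output keeps items under value, and that posting goes through the LiveJournal-style XML-RPC interface that Dreamwidth also speaks; field names are from the LJ protocol docs, credentials and ids are placeholders, and you'd need PHP's xmlrpc extension:)

    <?php
    // Run the pipe, format the results as HTML, and post them as a
    // journal entry. PIPE_ID, USERNAME, and PASSWORD are placeholders.
    $url = 'http://pipes.yahoo.com/pipes/pipe.run?_id=PIPE_ID'
         . '&_render=json&days=1'; // 'days' is a hypothetical pipe input
    $data = json_decode(file_get_contents($url), true);

    $html = "<ul>\n";
    foreach ($data['value']['items'] as $item) {
        $html .= sprintf("<li><a href=\"%s\">%s</a></li>\n",
                         htmlspecialchars($item['link']),
                         htmlspecialchars($item['title']));
    }
    $html .= "</ul>\n";

    // LJ-protocol XML-RPC post; sending the password in the clear is
    // sketch-only laziness (the real protocol has challenge/response).
    $request = xmlrpc_encode_request('LJ.XMLRPC.postevent', array(array(
        'username'    => 'USERNAME',
        'password'    => 'PASSWORD',
        'event'       => $html,
        'subject'     => 'Newsletter for ' . date('j F Y'),
        'lineendings' => 'unix',
        'year'        => (int) date('Y'),
        'mon'         => (int) date('n'),
        'day'         => (int) date('j'),
        'hour'        => (int) date('G'),
        'min'         => (int) date('i'),
    )));
    $context = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: text/xml\r\n",
        'content' => $request,
    )));
    file_get_contents('http://www.dreamwidth.org/interface/xmlrpc',
                      false, $context);

The "run automatically" part is then just a cron entry, e.g. 0 8 * * * php /path/to/newsletter.php (path made up, obviously).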

The problem I see with this approach to newslettering is the lack of any kind of categorization or link vetting. I think I'd be more inclined to make my PHP script pull the pipe results and post each link to Delicious automatically. Then I would have a human go through the links, vetting each one and probably applying a tagging scheme -- from there I would run a second script to scrape the vetted links from Delicious and convert them to a grouped and sorted list formatted in HTML.
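
(The push-to-Delicious half would look something like this -- again only a sketch, using the v1 posts/add call over HTTP auth; $data is the pipe output from the script above, credentials are placeholders, and the 'unvetted' tag is just my made-up marker for the human vetter to filter on:)

    <?php
    // Post each pipe result to Delicious for human vetting, one
    // v1 API call per link; sleep() to stay inside rate limits.
    foreach ($data['value']['items'] as $item) {
        $call = 'https://USERNAME:PASSWORD@api.del.icio.us/v1/posts/add?'
              . http_build_query(array(
                    'url'         => $item['link'],
                    'description' => $item['title'],
                    'tags'        => 'unvetted',
                ));
        file_get_contents($call);
        sleep(1);
    }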