Friday, July 24, 2009

Google Library Project: Following the Copyright Debate

4th Reading Assignment:

Kaushik, Anant, and Nishant Prakash. "Google Library Project: Following the Copyright Debate." ICFAI Journal of Intellectual Property Rights 8.1 (Feb. 2009): 74-80. Academic Search Complete. EBSCO. [Library name], [City], [State abbreviation]. 21 July 2009 http://search.ebscohost.com/login.aspx?direct=true&db=s8h&AN=36350168&loginpage=Login.asp&site=ehostlive&scope=site.

Abstract:

The Google Library Project is an effort by Google to scan the collections of several major libraries, namely Harvard, Stanford, Oxford, Michigan, and the New York Public Library, and make them searchable on the Internet. Bibliographic information will be available together with “snippets”, which are a few lines of the book shown in conjunction with the search query. The full text of books out of copyright will also be available for free download. Unfortunately, owing in part to the confidential nature of the project, Google was slapped with two copyright lawsuits. This article presents the background of the Project and traces the events that led to the lawsuits filed against Google. The “fair use” doctrine, along with its four non-exclusive factors, is also discussed; this doctrine was the defense used by Google to counter the copyright infringement claims against it. The Project’s opt-out approach is also explained, and some suggestions are given that highlight the importance of authors, who are the main complainants against the project. Finally, the article presents conclusions and suggestions to further enhance current copyright legislation.
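
As a side note on how the “snippet” display might work in principle, here is a minimal sketch in Python. The function name, the fixed word window, and the sample sentence are my own assumptions for illustration; this is not Google’s actual implementation.

    # Hypothetical illustration of snippet display: given the full text of a book
    # and a search query, return only a short window of words around the first
    # match. The window size and function name are assumptions for illustration.

    def make_snippet(text, query, window=10):
        words = text.split()
        lowered = [w.lower() for w in words]
        for i, word in enumerate(lowered):
            if query.lower() in word:
                start = max(0, i - window)
                end = min(len(words), i + window + 1)
                return "... " + " ".join(words[start:end]) + " ..."
        return ""  # query not found in this text

    # Example: only a few words around "copyright" are shown, not the whole text.
    sample = "The doctrine of fair use limits the exclusive rights of copyright holders in certain circumstances."
    print(make_snippet(sample, "copyright", window=4))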

Three things I learned:

1. I became familiar with the “fair use” doctrine and its four non-exclusive factors. Through this article, I also learned additional information about copyright law.

2. The Google Library Project is an ambitious yet very useful project. If it ever pushes through, I think it can be considered a “Universal OPAC”.

3. The article also discusses various copyright infringement cases that are very similar to the case against Google. It is interesting to learn about these cases and how the defendants used the “fair use” doctrine against claims of copyright infringement.

Reflections:

I believe that Google has clean intentions in creating this library project. Although there is no doubt that there would be financial gains through advertisements and sponsorships, I think that, at the end of the day, all of this will be overshadowed by the help the project will give its potential users. In my opinion, the project would not have been hit with two lawsuits if Google had not used the opt-out approach. Under this approach, copyright holders whose works are included in the project must notify Google if they do not want them included. We should remember how important royalties are to the writers of copyrighted materials. Given this kind of approach, along with other vital concerns, legal action was certain to be filed.
Just an update: as of now, the Google Library Project is still in its beta stage. Google is currently awaiting the courts' decision on the cases filed against it.

Friday, July 17, 2009

Library 2.0 Theory: Web 2.0 and its Implications for Libraries

3rd Reading Assignment

Maness, Jack M. “Library 2.0 Theory: Web 2.0 and its Implications for Libraries.” Webology 3.2 (2006). 15 July 2009. http://www.webology.ir/2006/v3n2/a25.html.

Abstract:

The article introduces “Library 2.0” by providing a clarified definition and theory of the term, as well as discussing who coined it, how it evolved, and its impact on the current and future practice of librarianship. This is provided in order to eliminate the ambiguity about the term brought on by the various discussions around it, which are often broad and extensive. Furthermore, in order to support the theory, different web technologies and applications such as synchronous messaging, blogs, wikis, streaming media, social networks, RSS feeds, and mashups are presented.
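
To make one of these technologies concrete, here is a minimal sketch, using only the Python standard library, of how a library might read an RSS 2.0 feed of, say, new acquisitions. The feed URL is a made-up placeholder, not a real library feed.

    # Minimal RSS 2.0 reader using only the Python standard library.
    # The feed URL below is a placeholder, not a real library feed.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "http://example.org/library/new-acquisitions.rss"  # hypothetical

    def read_feed(url):
        with urllib.request.urlopen(url) as response:
            root = ET.fromstring(response.read())
        # RSS 2.0 nests <item> elements inside <channel>.
        for item in root.findall("./channel/item"):
            title = item.findtext("title", default="(no title)")
            link = item.findtext("link", default="")
            print(f"{title} -> {link}")

    if __name__ == "__main__":
        read_feed(FEED_URL)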

3 Things that I learned:

1. Because of this article, I have acquired more in-depth knowledge of Library 2.0.

2. Prior to reading this article, I never knew that there was a social networking site called “LibraryThing”.

3. I learned about new web technologies and applications such as mashups and tagging.

Reflections:

It was just fascinating to see how the World Wide Web has evolved over the years, and how it continues to affect different fields of study. In the case of librarianship, we started automation with simple OPACs that had limited features and capabilities. Now we already have OPACs that use Web 2.0 technologies and applications. Change is the only constant; it will come and should always be welcomed.

Friday, July 10, 2009

Hard Facts about Software Piracy

2nd reading assignment

Source:
Crittenden, William F., Christopher Robertson, and Victoria Crittenden. "Hard facts about software piracy." Business Strategy Review 18.4 (Winter 2007): 30-33. Business Source Complete. EBSCO. [Library name], [City], [State abbreviation]. 10 July 2009 http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=27448322&loginpage=Login.asp&site=ehost-live&scope=site.

Abstract:
This article discusses the growing problem of intellectual property violations in computing, focusing particularly on software piracy. It tackles the costs associated with piracy, identifies who pirates the most, and discusses how the problem can be controlled. The different types of software piracy are presented, along with a definition of the main topic. One of the highlights of the article is the discussion on “pirates versus buccaneers”. Software piracy is definitely illegal and should not be tolerated in any way. However, “buccaneers”, or supporters of the free software movement, believe in software sharing and in limiting copyright protection. The legalities of software piracy are also discussed, as well as the variation of piracy levels across different regions of the world. Finally, preventive measures that can decrease software piracy, such as legal action, education programmes, and copy protection techniques, are presented.

Things that I learned from the article:
1. Software piracy has different types.
2. Using pirated software increases the risk of virus attacks and corruption of the computer system. Lack of technical support, warranties, and inadequate documentation are also drawbacks of using illegal software.
3. Economic factors are not the only ones that increase the level of piracy in a country. Culture should also be seen as a cause: according to the article, if the people around you use pirated software, you will most likely use it too.
4. Countries that advocate open source or free software also have higher levels of software piracy. Corrupt countries also have relatively high piracy rates.
5. Information about recreational piracy and end-user software buccaneering.

Reflections:
Software piracy has been one of the major problems worldwide, and I believe that Filipinos are pretty much aware of it. Of all the topics in computer ethics, I think that the violation of intellectual property, particularly software piracy, is a critical and sensitive topic to discuss. It is critical because software piracy is illegal. However, if one considers the notions of “everybody does it”, “recreational piracy”, or “end-user software buccaneering”, then one might have a point in legitimately using the software. At the same time, this topic is also sensitive because any opinion against or in support of it might immediately earn you a head turn or a raised eyebrow, especially if you are in the Philippines.

Overall, I agree with the closing statement of the article, which suggests that issuing global standards is not the only way to reduce software piracy. One must also consider economic, cultural, and ideological differences. Studying the philosophical frameworks affecting software piracy might eventually lead us to the most effective way of eradicating this global problem.

Thursday, July 2, 2009

1st Reading Assignment

Topic: Information Retrieval


"Information discovery and retrieval tools"

by: Michael T. Frame


Source:


Frame, Michael T. “Information discovery and retrieval tools.” Information Services & Use 24.4 (2004): 187-193. EBSCOHost. EBSCOHost Connection. 01 July 2009 http://search.ebscohost.com/.


Exact URL of the article:

http://web.ebscohost.com/ehost/detail?vid=7&hid=104&sid=7136e648-868b-4f35-b857-77d4641f3ed5%40sessionmgr2&bdata=JmxvZ2lucGFnZT1Mb2dpbi5hc3Amc2l0ZT1laG9zdC1saXZlJnNjb3BlPXNpdGU%3d#db=ehh&AN=16872295

EBSCOHost is part of the online database subscription of the Ateneo de Manila University, from which I viewed and downloaded the article. You might not be able to access the full-text version if you do not have the required subscriptions.

_____________________________________________________________________________________


Abstract:

In 2004, it was estimated that 10 billion web pages existed on the World Wide Web. Given that huge number, it is evident why users, particularly those with only casual knowledge of the Internet, find it very difficult to search for and retrieve relevant and accurate information. One of the many solutions to this problem is the effective use of the different search engines available on the web. This article focuses on how search engines work, and enumerates the different features and capabilities of the tool. Tips on how web developers and creators can improve the discovery of their web content are also discussed. One aim of this discussion is to discourage web developers from intentional tricks such as search engine spamming, which only causes problems for users’ web searches. Furthermore, various terminologies such as SPAM, metatags, and spidering methodology are introduced, along with simple search techniques for Internet users. All of this is provided to minimize the problem of cluttered search results and to improve the search experience and information retrieval of World Wide Web users.
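
To give a feel for the “spidering” and metatag ideas the article mentions, below is a minimal sketch, using only the Python standard library, of how a crawler might fetch one page and read its meta tags. The URL is a placeholder, and real search engine spiders are of course far more sophisticated (they follow links, respect robots.txt, build indexes, and so on).

    # Very small illustration of the "spidering" idea: fetch one page and list the
    # <meta> tags a search engine spider could index. The URL is a placeholder.
    import urllib.request
    from html.parser import HTMLParser

    class MetaTagParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.meta_tags = []

        def handle_starttag(self, tag, attrs):
            if tag == "meta":
                attrs = dict(attrs)
                name = attrs.get("name", "")
                content = attrs.get("content", "")
                if name:
                    self.meta_tags.append((name, content))

    def fetch_meta_tags(url):
        with urllib.request.urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")
        parser = MetaTagParser()
        parser.feed(html)
        return parser.meta_tags

    if __name__ == "__main__":
        for name, content in fetch_meta_tags("http://example.org/"):  # hypothetical page
            print(f"{name}: {content}")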


Reflections:

I think the article I selected actually helped me become familiar with some of the various terminologies related to web search and retrieval. What is interesting about this article is that it gives us a more detailed look at search engines, one of the tools used in information retrieval. Yes, we use Google, Yahoo!, AltaVista, Ask Jeeves, etc. practically all the time. But do we know what is happening behind the search? Probably not.

We casual Internet users might find this question uninteresting or irrelevant, since what we ultimately want is to search the web and retrieve the information we need. But after reading the article and its discussions of spam, metatags, the search engine model, and how a search engine actually sees a search, I realized the importance of acquiring technical knowledge of web retrieval tools, and how it can actually help casual users in their search and retrieval experience.

We do not actually need to be technical experts in this field. Being able to distinguish a spam page from an actually relevant web page is already a big leap for all of us.


Three things I learned from the article:

  1. Honestly, my knowledge of SPAM was very limited. I usually encounter SPAM in electronic mail, in the form of advertising and promotional messages. After reading the article, I learned that there is actually SPAM in web pages, and that web developers use it to increase the number of hits on their pages. This is a tricky way to improve Internet business: as we all know, the number of hits usually gives a site more popularity and thus more revenue. This is one of the primary reasons why we always encounter a number of irrelevant results during a web search.


  2. How search engines work, and the different features and capabilities that can be used to retrieve the right information. Some of these features are case-sensitive and case-insensitive searching, Boolean search capabilities, result weighting, a simple search interface, the use of remote indexing, etc. (A small sketch of Boolean searching with result weighting follows this list.)

    Today, Internet users are offered so many search engines that selecting an effective one becomes a problem. But if we take these features and capabilities into account, we will surely succeed in choosing the right one.


  3. The importance of metatags and why they should be embedded in an HTML document. I also learned about a different kind of metatag, called “Common name”: a custom metatag that can help further narrow already-narrowed search results.
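
As mentioned in item 2 above, here is a minimal sketch, in Python, of the kind of Boolean searching and simple result weighting a search engine might offer. The sample documents and the scoring-by-term-frequency choice are my own illustrative assumptions, not the model described in the article.

    # Toy illustration of Boolean (AND) search with simple result weighting:
    # documents matching every query term are ranked by how often the terms occur.
    # The sample documents and the term-frequency scoring are illustrative only.
    from collections import defaultdict

    documents = {
        1: "search engines crawl the web and index pages",
        2: "metatags help search engines describe a page",
        3: "boolean operators narrow a web search",
    }

    # Build an inverted index: term -> {doc_id: term frequency}.
    index = defaultdict(lambda: defaultdict(int))
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term][doc_id] += 1

    def boolean_and_search(query):
        terms = query.lower().split()
        # Keep only documents containing every term (Boolean AND).
        matching = set(documents)
        for term in terms:
            matching &= set(index[term])
        # Weight results by the total frequency of the query terms.
        scores = {d: sum(index[t][d] for t in terms) for d in matching}
        return sorted(scores, key=scores.get, reverse=True)

    print(boolean_and_search("search engines"))  # e.g. [1, 2]; doc 3 lacks "engines"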


Conclusions:

This article focuses more on the tools and terminologies important in information retrieval than directly on techniques that can be utilized to improve one’s search. However, upon reading the article, readers will certainly know how to use search engines effectively and how to select which tool to use based on the presence of different features and capabilities. In the end, all this information will surely lead the user to successful Internet retrieval.