First Entry
Templates: try to get bluerobot's template to work with my CSS on BLOG.... Others have done it.
CommuniGate Pro -- an advanced mail server geared for a multiplatform enterprise setup. Review of CommuniGate Pro.
GLMail: Secure, Scalable, and Linux Based. This is the product used by Sean McMullan.
ThorNet: MovableType Help Archives discusses installing a spell checker for Movable Type.
802.11a, Wireless LAN, 802.11b Wireless LAN Equipment
Hi all!
During the meetings for Espaces/TODOS I made some interesting contacts with potential partners:
Espaces: Piotr Fuglewicz, Logotec Engineering, Poland
Logotec is a Europe-wide company offering products for information management (knowledge management, CRM). They are Microsoft Gold certified. Subsidiaries are in Poland, Germany, the USA, Switzerland, Belgium, and Italy. www.logotecengineering.com
Espaces: Karl A. Hribernik (Austrian/Scottish), BIBA plt, Bremen, Germany. BIBA plt is part of the University of Bremen, focused on logistics and telematics. They are active in FP5 and FP6. www.biba.uni-bremen.de
TODOS: Gonzola Quiles Albert, INDRA, Madrid, Spain
INDRA is a big (6,000 employees) consulting agency active in telecommunications, defense equipment, and more. Markets are Western Europe and South America. They lead the 'Mobility/Terminal' working group in TODOS. www.indra.es
TODOS: Oliver Sharpe, Director of Runtime Collective, Brighton, UK. Runtime Collective is a consulting company that uses only open source technology. They are strong in Java and Web Services, with activities in database applications (meaning applications that use databases extensively).
Areas: crm, project management software, intranets, streaming media. www.runtime-collective.com
TODOS: Gudrun Magnusdottir (Managing Director)/Marita Asunmaa, ESTeam, Greece. ESTeam has provided automatic translation solutions for corporate and institutional clients. Their goal is to provide the best translation environment for both interactive and automatic translation scenarios. www.esteam.gr
TODOS: Stefan Salz, Fraunhofer AIS, Bonn, Germany
AIS specializes in autonomous intelligent systems, from robots up to CSCW software. They cooperate strongly with FhG FOKUS on eGovernment and eDemocracy. www.ais.fraunhofer.de
If anything here is interesting for you, I can give detailed contact information and arrange whatever you need.
Regards,
vdm
Hugo recommended this to me last night: Seven Beauties (1975) - ForeignFilms.com (original Italian title: Pasqualino Settebellezze).
raelity bytes: an RSS aggregator which can work with Movable Type. Must install.
Fantasy Sports Portal: UK Fantasy Sports & Soccer
SIP Compression Patent -- United States Patent Application 0030013431, mainly based on work done in WMA!
Presentation to BSc IT 2005-03-07 14:15 WIT Auditorium:
Download PPT file
MT-Redland
Parrot CK3300 Advanced GPS Bluetooth Car Kit
SkypeHeadset - Skype plugin for Bluetooth headsets
Widcomm Bluetooth drivers
Siobhan Ryan (CORE)
Elaine Sheridan (WIT)
Maedhbh Brosnan (WIT)
Payment:
Every week, separate from weekly payroll (significant processing speed improvement)
Cost Centres:
Need to enter when submitting
Manager may reject, but the individual needs to make the change
Receipts:
Hope to allow scanning of the receipts
Approver:
Can select any approver from the list - no sanity check that that approver has authority over that budget!
Only one is chosen, but any valid person may be chosen, so this also allows flexibility.
Kilometers/Subsistence:
Notes field by
Put in travel as a receipt - for flat rate approval
Check for insurance - no longer recorded - but there is a declaration section
Receipts:
Preset types of receipt
Can enter flights/train/conference/hotel ... paid directly by budget to show full cost, without claiming it
Can print the form, staple receipts, and process manually via the approver
Click on link, login, click on view trips to be approved, click on one to approve, approve
Suggestion: use URL to allow single click with background authentication
A change in amount will require reapproval
No change in amount requires no further approval!
All approvers can report on all budgets
How to recreate your iTunes Library
Learn how to recreate your iTunes library and playlists in iTunes versions 3 and later.
Recreating the iTunes Library file
Follow these steps:
1. Quit iTunes.
2. Locate your iTunes folder.
* For Mac OS X the iTunes folder is stored in one of the following locations:
/Users/username/Music
/Users/username/Documents
Note: You may need to check both locations.
* For Microsoft Windows the iTunes folder is stored in:
\Documents and Settings\username\My Documents\My Music\
3. Open your iTunes folder.
4. Drag the iTunes Music Library.xml file to the Desktop.
5. Drag the following file from your iTunes folder to the Trash:
* Mac OS X: "iTunes Library" (in versions of iTunes prior to 4.9 this was called "iTunes 4 Music Library").
* Microsoft Windows: "iTunes Library.itl" (in versions of iTunes prior to 4.9 this was called "iTunes 4 Music Library.itl").
6. Open iTunes.
7. From the File menu, choose Import.
Tip: Do not add any music into iTunes at this point.
8. Navigate to the iTunes Music Library.xml file on the Desktop.
9. Click Choose.
For steps to backup and restore playlists see "iTunes: How to backup and restore playlists."
If your podcasts list in iTunes is empty after following these steps, see Apple's related support note.
Note: If the iTunes Music Library.xml file is not available, follow the "Add to library" steps in "iTunes: About the Add to Library, Import, and Convert functions" to add files back to the Library (because the iTunes Music Library.xml file is not available, playlists and other information will not be available).
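For the cautious, the drag-to-Desktop and drag-to-Trash steps can also be done from the command line on Mac OS X; a minimal sketch, assuming the default ~/Music/iTunes location:

cp ~/Music/iTunes/"iTunes Music Library.xml" ~/Desktop/   # copy the XML aside (step 4)
mv ~/Music/iTunes/"iTunes Library" ~/.Trash/              # move the binary library to the Trash (step 5)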
Here's a really interesting article: Space Race Exhibition
Terry Turner MSA On-Line Forum: Features
1000 EUR per year plus cost of box and installation
David D. Clark
About Motorola Technology-People-John Strassner
Fellow of the Technical Staff & Director of Autonomic Networking
Science Foundation Ireland: Grants and Awards: SFI Strategic Research Cluster (SRC) Programme
SFI Strategic Research Cluster (SRC) Programme
SFI Strategic Research Clusters (SRCs) will help link scientists and engineers in partnerships across academia and industry to address crucial research questions, foster the development of new and existing Irish-based technology companies, and grow partnerships with industry that could make an important contribution to Ireland and its economy.
The SRC programme has been designed to facilitate the clustering of outstanding researchers to carry out joint research activities in areas of strategic importance to Ireland (in ICT and/or BioTech sectors), while also giving the time and resources to attract and cultivate strong industry partnerships that can inform and enhance their research programmes.
Objectives of the SRC Programme
* Create clusters of internationally-competitive researchers from academia and industry, particularly Irish-based industry.
* Exploit opportunities in science and engineering where the complexity of the research agenda requires the advantages of synergy, scale and shared resources that clusters of research partners can provide.
* Facilitate the development of new research partnerships and the strengthening of existing partnerships between academic and industrial researchers.
* Build interdisciplinary links among researchers.
* Create awareness among academic-based researchers of industrial road maps and research goals.
* Support excellence in research as measured by international merit review.
Funding
SRC grants will be awarded for periods of three years with possible extension for an additional two years following successful scientific and strategic progress review.
Grants will normally range from €500,000 to €1,500,000 direct costs per year over the five (3+2) year period. The budget requested should be appropriate to the number of participating PIs within the cluster, the experience and track record of the PIs, and the scale of the research programme to be undertaken.
Downloads
The following documents describe the SRC programme in detail:
Call for Proposals (doc) (pdf)
Application Process (doc) (pdf)
Pre-Proposal Application Form (doc) (pdf)
Full Proposal Application Form (doc) (pdf)
Deadlines
Expression of interest submission: deadline passed
Information meeting: 21 August 2006
Pre-Proposal submission: 3 November 2006
Invitations to submit full proposals: mid-December 2006
Full Proposal submission: 16 March 2007
Information Meeting
The joint CSET/SRC Information Day took place on Monday 21st August. The presentations given at the meeting are available here:
Overview of SRC Programme (pdf)
Overview of CSET Programme (pdf)
Overview of SFI IP Guidelines (pdf)
IDA Briefing (pdf)
EI Briefing (pdf)
Developing a CSET: Lessons Learned (pdf)
FAQ
Updated FAQs for the SRC programme are available here. (word) (pdf)
General questions relating to Grants and Awards are available here.
The Universe of Discourse : Design patterns of 1972
Design patterns of 1972
Bill de hÓra: Web services: Rosencrantz and Guildenstern are not dead
Web services: Rosencrantz and Guildenstern are not dead
Hibernate with the MacBook Pro
alias hibernateon='sudo pmset -a hibernatemode 1'
alias hibernateoff='sudo pmset -a hibernatemode 0'
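To check which mode is currently active before toggling, pmset can print the live power settings (a quick sanity check, not part of the aliases above):

pmset -g | grep hibernatemode   # shows the current hibernatemode value

With the aliases in place, hibernateon makes the next sleep write RAM to disk (mode 1), and hibernateoff restores the normal sleep behaviour (mode 0).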
From: "Eugene Crehan"
Date: 5 September 2007 19:01:18 IST
To: "Barry Downes"
Subject: Fwd: Corporate Accommodation Rate at the IMI Residence in Sandyford, Dublin South.
Dear Colleagues,
I have negotiated a Corporate Rate for WIT for accommodation at the IMI Residence. The rate is €79.00 per night and is inclusive of breakfast & VAT plus free Broadband & parking.
To avail of this rate, please follow the instructions below. Please let me have feedback on your stay at the IMI Residence.
Regards
Eugene
>>> "Deirdre Grealy"
Good afternoon,
Further to my telephone conversation with Eugene yesterday, I just wanted to drop you a quick email to inform you that should you have any corporate accommodation requirements in Dublin, here in the IMI we offer excellent quality accommodation with free broadband in every room. Each room also has a spacious work/study area, direct dial, tea/coffee-making facilities, TV and power shower. We are open 24 hours, have ample parking free of charge and provide tranquil and peaceful surroundings.
Our corporate rate is €93 p/night p/room; however, after speaking with Eugene, I am prepared to offer you and your colleagues a rate of €79 p/room p/night. This includes breakfast and VAT.
Please find attached a .pdf document with information on our Residence as well as a corporate rate agreement. Please sign the rate agreement and return it to me by fax so that for all future bookings, on mentioning WIT, you will automatically receive the lower and agreed rate of €79 p/room.
Should you have any queries, please let me know. I look forward to hearing
from you.
Rgds
Deirdre
Deirdre Grealy
Corporate Sales Executive
Tel: 2078478
Mobile: 086 6099767
Interoperability Happens - Can Dynamic Languages Scale?
Can Dynamic Languages Scale?
The recent "failure" of the Chandler PIM project generated the question, "Can Dynamic Languages Scale?" on TheServerSide, and, as is all too typical these days, it turned into a "You suck"/"No you suck" flamefest between a couple of posters to the site.
I now make the perhaps vain attempt to address the question meaningfully.
What do you mean by "scale"?
There's an implicit problem with using the word "scale" here, in that we can think of a language scaling in one of two very orthogonal directions:
1. Size of project, as in lines-of-code (LOC)
2. Capacity handling, as in "it needs to scale to 100,000 requests per second"
Part of the problem I think that appears on the TSS thread is that the posters never really clearly delineate the differences between these two. Assembly language can scale(2), but it can't really scale(1) very well. Most people believe that C scales(2) well, but doesn't scale(1) well. C++ scores better on scale(1), and usually does well on scale(2), but you get into all that icky memory-management stuff. (Unless, of course, you're using the Boehm GC implementation, but that's another topic entirely.)
Scale(1) is a measurement of a language's ability to extend or enhance the complexity budget of a project. For those who've not heard the term "complexity budget", I heard it first from Mike Clark (though I can't find a citation for it via Google--if anybody's got one, holler and I'll slip it in here), he of Pragmatic Project Automation fame, and it's essentially a statement that says "Humans can only deal with a fixed amount of complexity in their heads. Therefore, every project has a fixed complexity budget, and the more you spend on infrastructure and tools, the less you have to spend on the actual business logic." In many ways, this is a reflection of the ability of a language or tool to raise the level of abstraction--when projects began to exceed the abstraction level of assembly, for example, we moved to higher-level languages like C to help hide some of the complexity and let us spend more of the project's complexity budget on the program, and not with figuring out which register needed to have the value of the interrupt to be invoked. This same argument can be seen in the argument against EJB in favor of Spring: too much of the complexity budget was spent in getting the details of the EJB beans correct, and Spring reduced that amount and gave us more room to work with. Now, this argument is at the core of the Ruby/Rails-vs-Java/JEE debate, and implicitly it's obviously there in the middle of the room in the whole discussion over Chandler.
Scale(2) is an equally important measurement, since a project that cannot handle the expected user load during peak usage times will have effectively failed just as surely as if the project had never shipped in the first place. Part of this will be reflected in not just the language used but also the tools and libraries that are part of the overall software footprint, but choice of language can obviously have a major impact here: Erlang is being tossed about as a good choice for high-scale systems because of its intrinsic Actors-based model for concurrent processing, for example.
Both of these get tossed back and forth rather carelessly during this debate, usually along the following lines:
1. Pro-Java (and pro-.NET, though they haven't gotten into this particular debate so much as the Java guys have) adherents argue that a dynamic language cannot scale(1) because of the lack of type-safety commonly found in dynamic languages: the compiler is not there to methodically ensure that parameters obey a certain type contract, that objects are not asked to execute methods they couldn't possibly satisfy, and so on. In essence, strongly-typed languages are theorem provers, in that they take the assertion (by the programmer) that this program is type-correct, and validate that. This means less work for the programmer, as an automated tool now runs through a series of tests that the programmer doesn't have to write by hand; as one contributor to the TSS thread put it:
"With static languages like Java, we get a select subset of code tests, with 100% code coverage, every time we compile. We get those tests for "free". The price we pay for those "free" tests is static typing, which certainly has hidden costs."
Note that this argument frequently derails into the world of IDE support and refactoring (as is witnessed on the TSS thread), pointing out that Eclipse and IntelliJ provide powerful automated refactoring support that is widely believed to be impossible on dynamic language platforms.
2. Pro-Java adherents also argue that dynamic languages cannot scale(2) as well as Java can, because those languages are built on top of their own runtimes, which are arguably vastly inferior to the engineering effort that goes into the garbage collection facilities found in the JVM Hotspot or CLR implementations.
3. Pro-Ruby (and pro-Python, though again they're not in the frame of this argument quite so much) adherents argue that the dynamic nature of these languages means less work during the creation and maintenance of the codebase, resulting in a far smaller lines-of-code count than one would have with a more verbose language like Java, thus implicitly improving the scale(1) of a dynamic language.
On the subject of IDE refactoring, scripting language proponents point out that the original refactoring browser was an implementation built for (and into) Smalltalk, one of the world's first dynamic languages.
4. Pro-Ruby adherents also point out that there are plenty of web applications and web sites that scale(2) "well enough" on top of the MRI (Matz's Ruby Interpreter) that comes "out of the box" with Ruby, despite the widely-described fact that MRI Ruby Threads are what Java used to call "green threads", where the interpreter manages thread scheduling and management entirely on its own, effectively using one native thread underneath.
5. Both sides tend to get caught up in "you don't know as much as me about this" kinds of arguments as well, essentially relying on the idea that the less you've coded in a language, the less you could possibly know about that language, and the more you've coded in a language, the more knowledgeable you must be. Both positions are fallacies: I know a great deal about D, even though I've barely written a thousand lines of code in it, because D inherits much of its feature set and linguistic expression from both Java and C++. Am I a certified expert in it? Hardly--there are likely dozens of D idioms that I don't yet know, and certainly haven't elevated to the state of intuitive use, and those will come as I write more lines of D code. But that doesn't mean I don't already have a deep understanding of how to design D programs, since it fundamentally remains, as its genealogical roots imply, an object-oriented language. Similar rationale holds for Ruby and Python and ECMAScript, as well as for languages like Haskell, ML, Prolog, Scala, F#, and so on: the more you know about "neighboring" languages on the linguistic geography, the more you know about that language in particular. If two of you are learning Ruby, and you're a Python programmer, you already have a leg up on the guy who's never left C++. Along the other end of this continuum, the programmer who's written half a million lines of C++ code and still never uses the "private" keyword is not an expert C++ programmer, no matter what his checkin metrics claim. (And believe me, I've met way too many of these guys, in more than just the C++ domain.)
A couple of thoughts come to mind on this whole mess.
Just how refactorable are you?
First of all, it's a widely debatable point as to the actual refactorability of dynamic languages. On NFJS speaker panels, Dave Thomas (he of the PickAxe book) would routinely admit that not all of the refactorings currently supported in Eclipse were possible on a dynamic language platform given that type information (such as it is in a language like Ruby) isn't present until runtime. He would also take great pains to point out that simple search-and-replace across files, something any non-trivial editor supports, will do many of the same refactorings as Eclipse or IntelliJ provides, since type is no longer an issue. Having said that, however, it's relatively easy to imagine that the IDE could be actively "running" the code as it is being typed, in much the same way that Eclipse is doing constant compiles, tracking type information throughout the editing process. This is an area I personally expect the various IDE vendors will explore in depth as they look for ways to capture the dynamic language dynamic (if you'll pardon the pun) currently taking place.
Who exactly are you for?
What sometimes gets lost in this discussion is that not all dynamic languages need be for programmers; a tremendous amount of success has been achieved by creating a core engine and surrounding it with a scripting engine that non-programmers use to exercise the engine in meaningful ways. Excel and Word do it, Quake and Unreal (along with other equally impressively-successful games) do it, UNIX shells do it, and various enterprise projects I've worked on have done it, all successfully. A model whereby core components are written in Java/C#/C++ and are manipulated from the UI (or other "top-of-the-stack" code, such as might be found in nightly batch execution) by these less-rigorous languages is a powerful and effective architecture to keep in mind, particularly in combination with the next point....
Where do you run again?
With the release of JRuby, and the work on projects like IronRuby and Ruby.NET, it's entirely reasonable to assume that these dynamic languages can and will now run on top of modern virtual machines like the JVM and the CLR, completely negating arguments 2 and 4. While a dynamic language will usually take some kind of performance and memory hit when running on top of VMs that were designed for statically-typed languages, work on the DLR and the MLVM, as well as enhancements to the underlying platform that will be more beneficial to these dynamic language scenarios, will reduce that. Parrot may change that in time, but right now it sits at a 0.5 release and doesn't seem to be making huge inroads into reaching a 1.0 release that will be attractive to anyone outside of the "bleeding-edge" crowd.
So where does that leave us?
The allure of the dynamic language is strong on numerous levels. Without having to worry about type details, the dynamic language programmer can typically slam out more work-per-line-of-code than his statically-typed compatriot, given that both write the same set of unit tests to verify the code. However, I think this idea that the statically-typed developer must produce the same number of unit tests as his dynamically-minded coworker is a fallacy--a large part of the point of a compiler is to provide those same tests, so why duplicate its work? Plus we have the guarantee that the compiler will always execute these tests, regardless of whether the programmer using it remembers to write those tests or not.
Having said that, by the way, I think today's compilers (C++, Java and C#) are pretty weak in the type expressions they require and verify. Type-inferencing languages, like ML or Haskell and their modern descendents, F# and Scala, clearly don't require the degree of verbosity currently demanded by the traditional O-O compilers. I'm pretty certain this will get fixed over time, a la how C# has introduced implicitly typed variables.
Meanwhile, why the rancor between these two camps? It's eerily reminiscent of the ill-will that flowed back and forth between the C++ and Java communities during Java's early days, leading me to believe that it's more a concern over job market and employability than it is a real technical argument. In the end, there will continue to be a ton of Java work for the rest of this decade and well into the next, and JRuby (and Groovy) afford the Java developer lots of opportunities to learn those dynamic languages and still remain relevant to her employer.
It's as Marx said, lo these many years ago: "From each language, according to its abilities, to each project, according to its needs."
Oh, except Perl. Perl just sucks, period. :-)
PostScript
I find it deeply ironic that the news piece TSS cited at the top of the discussion claims that the Chandler project failed due to mismanagement, not its choice of implementation language. It doesn't even mention what language was used to build Chandler, leading me to wonder if anybody even read the piece before choosing up their sides and throwing dirt at one another.
ITworld.com - How to learn to love simple electronic filing systems
How to learn to love simple electronic filing systems
ITworld 1/25/2007
Sean McGrath, ITworld.com
There is a general malaise in this industry that centers around the word "manage". If you have word processor documents and spreadsheets and presentations just lying around on file systems, there is a tendency to think that they are not being managed as well as they could be. And what better way to manage data than with (gasp!) a database management system?
I regularly encounter situations where office-like information is bludgeoned and squeezed and pummeled into odd shapes and odd workflows in order to meet the need to "manage" the information effectively using database management systems.
Now please don't get me wrong. I am not an information anarchist. I believe passionately in information management. But, I'm also a believer in using the right tool for the right job. It is a fact of life unfortunately that tools featuring the word "management" in their classification are often more attuned to the management needs of relational data rather than office-like document information.
One fear that often drives users down the road of introducing a database for office-like document management is the fear of not being able to find things again. I used to have this fear with respect to my paper filing system. "How", I wondered, "should I label these paper filing cabinet folders so that I can find these things again?". "Should this invoice be filed under 'house' or 'insurance' or 'invoices' or 'personal' or something else?"
I finally escaped from this paralyzing classification fear by realizing that it doesn't really matter. For any given item, the number of reasonable choices for filing it is small. Rather than worry about finding the perfect one, just pick a reasonable one! When it comes to finding it again, you may need to rummage through 2-3 folders to find it, but that is a better use of your time than waiting for the (non-existent) perfect filing taxonomy to drop out of the sky for you.
I do exactly the same thing when filing office-style documents electronically. I use folder structures and file naming conventions. I go with the first reasonable option that comes into my head for any given electronic filing task. It might take me 2-3 shots to find it in the future but again, that is a better use of my time than endlessly searching for the (non-existent) perfect filing taxonomy to drop out of the sky.
That is step one on the road to pragmatic, simple information management. Step two is to ensure that you cannot inadvertently lose important information. To do that, I always keep my folder structures safely ensconced inside a revision control system such as Subversion. Using a revision control system kills three birds with one stone: centralized backup, protection against inadvertent modifications, and support for collaboration.
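As a concrete illustration, here is a minimal sketch of that Subversion setup from the command line; the repository path and folder names are illustrative, not from the article:

svnadmin create /srv/svn/filing                 # create an empty repository (path assumed)
svn checkout file:///srv/svn/filing ~/filing    # working copy that will hold the folder tree
cd ~/filing
mkdir invoices house insurance                  # pick reasonable folders, not perfect ones
svn add invoices house insurance
svn commit -m "initial filing structure"        # from here on, every change is versioned

The same working copy can be checked out on several machines, which is where the backup and collaboration benefits come from.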
There are many other bells and whistles that can be added to this basic setup but a surprising amount of high quality information management can be achieved with just this much.
One little extra I like to throw in: by using a file system in this way, I end up with a ready-made system of unique identifiers for my information assets. The full path to any document is a unique identifier. It is very useful to be able to use these identifiers as programmable identifiers inside computer programs. To do that, I like to restrict file and folder names to simple alphanumerics so that they can easily be turned into valid identifiers in common programming languages.
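With names restricted to alphanumerics, the path-to-identifier conversion is trivial; a one-line sketch (the path is hypothetical):

echo "invoices/2007/esb_bill.doc" | tr '/.' '__'   # -> invoices_2007_esb_bill_doc

tr maps both the slashes and the dot to underscores, yielding a string that is a legal identifier in most programming languages.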
Good old fashioned file systems have much to offer as information management tools. It is well worth thinking through your requirements to see if you actually, really truly need a database in order for your content to be managed. It is possible that you do, but it is also very possible that you do not. Databases have no monopoly on the word "manage".
Annual subscriptions to InfoSci-Journals (formerly IGI Full-Text Online Journal Collection) for colleges and universities are based on the institution’s relevant FTE – meaning the number of likely users of a database in the covered subject areas – rather than the full FTE. The following pricing structure is applied:
InfoSci Journals — Annual Site License
(Includes Perpetual Access to Subscribed Content Years)
Level   Relevant FTE   2008 Price
1       Under 5,000    $4,500
Thunderbird Help: Tips & Tricks
Check all IMAP folders for new mail
Thunderbird can download mail from all accounts when you start the program. Just open the Config Editor, search for the preference mail.check_all_imap_folders_for_new, and change its value to true.
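Equivalently, for those who prefer editing the profile directly, the same switch can be set with one line in the profile's user.js file (created if it does not exist; same syntax as prefs.js):

user_pref("mail.check_all_imap_folders_for_new", true); // poll every IMAP folder, not just INBOX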
lsb_release -a
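lsb_release -a prints the LSB identification of the installed Linux distribution: distributor ID, description, release number, and codename. Typical output (values illustrative):

Distributor ID: Ubuntu
Description:    Ubuntu 8.04 LTS
Release:        8.04
Codename:       hardy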
An interesting debate has started up in Ireland in the past few months on the justification for Ireland's investment in Science, Engineering and Technology (SET), now more usually called Science, Technology and Innovation (STI) to emphasise the importance of the potential exploitation of the results of research by relevant industries.
This page aims to catalogue some primary inputs to this debate, and does not of itself espouse a particular position. For that see other articles I have published on Ireland and STI. This blog does not have comments enabled, so if you wish to comment please email me at mofoghlu attt tssg.org -- I am happy to add additional content as long as it is available publicly on-line.
Malware Statistics for a sample of Irish Hosting Companies - Red Cardinal
http://google.com/safebrowsing/diagnostic?site=
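The domain to be checked is appended to the query string; for example (domain purely illustrative):

http://google.com/safebrowsing/diagnostic?site=example.com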
AUSTIN, Texas, March 9 /PRNewswire/ -- Buffalo Technology, a global leader in the design, development and manufacturing of wired and wireless networking and network and direct attached storage solutions, announced the next step in the partnership with NewMedia-NET to deliver DD-WRT based software as a standard configuration across Buffalo's array of high power routers and access points. DD-WRT, long a mainstay in the open source community, delivers an easy-to-use, versatile and extensive feature-set to a broader wireless networking audience. From the novice user to demanding professionals, this partnership provides best-in-class products for a wide range of consumers.
"Buffalo has always been on the bleeding edge of technological innovation, and incorporating NewMedia-NET's DD-WRT software solution into our high-end routers and access points is a natural evolution," said Ralph Spagnola, vice president of sales at Buffalo Technology. "With DD-WRT, we now deliver professional grade solutions at entry-level prices."
Serving millions of users worldwide, DD-WRT is the leading Linux based alternative open source firmware for wireless routers, enabling basic entry level equipment to act like enterprise products. With DD-WRT, Buffalo's lineup of high power routers will support professional features like VPN (PPTP, OpenVPN), VLAN (tagging), Virtual AP (multi-SSIDs for multi-connection and security), RADIUS server, hot spot support, volume quotas, iPv6 support, detailed monitoring and a host of other high-end features. Additionally, installation will be headache free with an easy setup wizard and 24/7 US-based toll free technical support.
"We are proud and excited to share this opportunity to introduce the DD-WRT experience to a broader audience with a leading global provider such as Buffalo Technology, known for exceptional hardware quality and reliability," said Sebastian Gottschall, CTO NewMedia-NET GmbH and founder of the DD-WRT project. "Providing DD-WRT as a factory installed firmware for Buffalo's line of AirStation? High Power Routers enables users to unlock a host of professional features never previously seen in the SOHO and consumer market."
Buffalo's new DD-WRT enabled high power router and access point offering will include WZR-HP-G300NH, WHR-HP-G300N and WHR-HP-GN, all delivering three-in-one functionality that is unmatched in the market. These new wireless solutions can either function as a high power router and access point, a wireless bridge or as a universal range extender. Uniquely, when operating as a universal range extender, all of Buffalo's DD-WRT enabled wireless solutions can connect to any router, regardless of brand, greatly extending the range of wireless coverage, eliminating 'dead spots'.
Pricing and Availability
WHR-HP-G300N and WHR-HP-GN will be available in May 2010 while WZR-HP-G300NH will be available in July. All three units are backed by a limited two-year warranty that includes toll-free, US based 24/7 technical support.
About Buffalo Technology
Buffalo Technology (USA), Inc., based in Austin, Texas, is a leading global provider of award-winning wireless networking, external storage, multimedia and NAS solutions for the home and small business environments as well as for system builders and integrators. With almost three decades of networking and computer peripheral experience, Buffalo has proven its commitment to delivering innovative, best-of-breed solutions that have put the company at the forefront of infrastructure technology. For more information about Buffalo Technology and its products, please visit www.buffalotech.com.
Buffalo, Inc. trademark statements. Buffalo is a trademark of Buffalo, Inc. All other trademarks mentioned herein are the property of their respective owners.
Read more: http://www.earthtimes.org/articles/show/buffalo-and-dd-wrt-collaborate-to,1197760.shtml
IP Infusion Announces Innovative Tunneling Technologies for Coexisting IPv4 and IPv6 Networks
SUNNYVALE, Calif., March 9 /PRNewswire/ -- IP Infusion, an ACCESS company and provider of intelligent software for Next Generation Network (NGN) equipment manufacturers and converged IP service providers, today announced innovative tunneling technologies that enable the coexistence of IPv4 and IPv6 networks. Since its inception in 1999, IP Infusion has pioneered the development of solutions for IPv4 and IPv6 technologies, and has carried out extensive testing with Japanese carriers. IP Infusion is making its solution available immediately for operators and Internet service providers as ZebOS® Rapid Deployment.
The IPv6 protocol was developed to support the growing need for additional IP addresses brought about by the decreasing availability of the 32-bit IPv4 address space. Due to the worldwide growth of the Internet and the increasing use of information appliances, especially in ASEAN countries, available IPv4 addresses are expected to run out in the very near future. IPv6 addressing provides an important solution as an alternative to IPv4, and has brought about the diversification of IP address environments. Operators urgently need not only to introduce IPv6 addresses, but also to manage their networks in mixed IPv4 and IPv6 environments.
The telecommunications industry has been exploring a variety of solutions, such as network address sharing technologies and network translation technologies. The comprehensive tunneling technologies for coexisting IPv4 and IPv6 networks developed by IP Infusion offer an immediate solution for operators.
IPv6 over IPv4 Tunneling Solution
IP Infusion's ZebOS Rapid Deployment forwards IPv6 traffic through IPv4 networks and is based on 6rd (IPv6 rapid deployment) specifications, which are published as a Request for Comments (RFC) by the Internet Engineering Task Force (IETF). The proposed IETF 6rd solution utilizes stateless IPv6-in-IPv4 encapsulation in order to transit IPv4-only network infrastructure, and can achieve high scalability by leveraging stateless tunneling technology. IP Infusion's ZebOS Rapid Deployment also provides an accounting function which manages user traffic for enabling Internet services, and a filtering function which differentiates users -- both important functions in order for carriers to deploy a new IPv6 service by using 6rd.
BBIX Inc. currently provides an Internet exchange service in Japan and plans to launch an IPv6 roaming service for other Internet service providers based on IP Infusion's ZebOS Rapid Deployment.
"We intend to be a leader in IPv6 transition by leveraging IP Infusion's ZebOS Rapid Deployment solution," said Keiichi Makizono, Director of the board for BBIX Inc. "IP Infusion is one of the most important partners for BBIX in driving IPv6 adoption."
"With the growing number of worldwide providers adopting IPv6, support for this routing protocol in Next Generation Metro Networks is a must," said Koichi Narasaki, president and CEO of IP Infusion. "As networks transition to IPv6, legacy support for IPv4 is imperative. IP Infusion's revolutionary ZebOS Rapid Deployment solution enables this transition for network providers."
About ACCESS
ACCESS CO., LTD. is a global company providing leading technology, software products and platforms for Web browsing, mobile phones, wireless handhelds, digital TVs and other networked devices. ACCESS' product portfolio, including its NetFront™ Browser, ACCESS Linux Platform™ and Garnet™ OS, provides customers with solutions that enable faster time to market, flexibility and customizability. The company, headquartered in Tokyo, Japan, operates subsidiaries and affiliates in Asia, Europe and the United States. ACCESS is listed on the Tokyo Stock Exchange Mothers Index under the number 4813. For more information about ACCESS, please visit http://www.access-company.com/.
About IP Infusion
IP Infusion Inc. delivers advanced software solutions that power communications equipment for packet-based Next Generation Networks (NGN). With a unique modular architecture and the industry's broadest suite of communication protocols, IP Infusion enhances product differentiation and market agility for many of the world's leading network equipment vendors. Incorporated in Delaware in October 1999, IP Infusion is headquartered in Sunnyvale, California, and is a wholly owned and independently-operated subsidiary of ACCESS Systems Americas, Inc., a wholly owned U.S. subsidiary of ACCESS CO., LTD., of Tokyo, Japan. For more information about IP Infusion, please visit www.ipinfusion.com.
© 2010 ACCESS CO., LTD. All rights reserved.
ACCESS, the ACCESS logo, NetFront, ACCESS Linux Platform and Garnet are registered trademarks or trademarks of ACCESS CO., LTD. in the United States, Japan and/or other countries.
IP Infusion and ZebOS are either registered trademarks or trademarks of IP Infusion Inc. in the United States and/or other countries.
The registered trademark LINUX® is used pursuant to a sublicense from Linux Mark Institute, the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide basis.
All other trademarks, logos and trade names mentioned in the document are the property of their respective owners.
SOURCE IP Infusion Inc.
Seven reasons IPv6 is overhyped :: SearchNetworking.com.au
Posted Mar 17, 2010 | By: Ivan Pepelnjak
Seven reasons IPv6 is overhyped
With the looming IPv4 address exhaustion expected to take place around the time the Mayan calendar runs out, IPv6 networking has started to attract the serious attention of the network masses. When people consider unknown technologies, a number of myths usually arise, most of them completely unfounded. IPv6 networking is no exception. Its evangelists and detractors have been propagating numerous "facts" that might have had some basis in reality a while ago but evolved into pure myths as they spread around the Internet. Let's take a look.
Myth #1: IPv6 networking provides service/location separation
Reality: Totally bogus.
A broken protocol stack and a broken reference implementation are among the biggest issues the Internet is facing today. Both require an application to take a service name, translate it into a network address and establish a connection to that address. Burdened with a transport protocol (TCP) that still lives in the dial-up world, the applications simply cannot cope with a service that is available on multiple network locations.
The IPv6 networking protocol could have solved this problem if its architects hadn't limited themselves to the single goal of extending address length. In its current incarnation, IPv6 gives us a longer address and nothing more.
Myth #2: IPv6 will simplify multihoming
Reality: Missed opportunity.
The designers of IPv6 took multihoming seriously and developed a protocol in which a single host can easily acquire multiple IPv6 addresses, even from address spaces belonging to multiple upstream service providers. Unfortunately, they've never tested their theories in real life. Having multiple IPv6 addresses does not help if the upper layers cannot use them efficiently (see previous myth).
Technologies that could support efficient multihoming with IPv6 are already available (SHIM6 and SCTP, for example) but not widely used, because it's easier for everyone to grab a provider-independent (PI) chunk of address space and pollute the global Internet routing tables.
Without an extra layer between the IPv6 addresses and the applications, the multihoming of e-commerce servers in the IPv6 world remains identical to IPv4 multihoming, and providing resilience to smaller client sites actually gets harder because IPv6 does not have Network Address Translation (NAT).
Myth #3: IPv6 will reduce IP routing tables and BGP problems
Reality: Missed opportunity.
The architects of IPv6 envisioned a strictly hierarchical address space in which every service provider would get huge amounts of address space and advertise only a few prefixes into the global routing tables. Unfortunately, they've never considered the high-availability requirements of e-commerce.
The Internet Engineering Task Force (IETF) had 15 years to address multihoming issues but failed to do so (see the previous myth). The only solution available to anyone who wants to be somewhat independent of a single service provider is to get a chunk of PI address space, run the border gateway protocol (BGP) and advertise the PI prefix to the global Internet. If anything, the routing tables will grow exponentially with the introduction of IPv6, as everyone will try to get PI address space.
BGP will fare even worse. Not only will the size of the IPv6 global routing table increase, IPv6 BGP tables use more space (and more bandwidth) than the corresponding IPv4 BGP tables. Last but not least, you should also consider what happens in the IPv6 transition period, when the routers will have to carry both IPv4 and IPv6 prefixes for the same set of end-user equipment.
Myth #4: IPv6 has better Quality of Service (QoS)
Reality: Obsolete.
IPv6 packet headers have a flow field designed to identify individual flows, which might be useful on low-speed links. On a decently fast link, you're forced to use class-based QoS (DiffServ), which uses the DSCP field in the packet header, as flow-based QoS (IntServ) does not scale. The DSCP field is available in both IPv4 and IPv6 headers.
Myth #5: IPv6 has better security
Reality: Not true.
IPSec might be better integrated in IPv6 headers, but there's nothing you can do with IPv6 IPSec that you cannot do with IPv4 IPSec.
Myth #6: IPv6 is required for mobility
Reality: No longer true.
When IPv6 was designed, IPv4 did not provide any IP mobility features. The lack of IPv6 networking deployment has prompted the development of IPv4 mobility solutions. Today, it's not hard to implement IPv4-based mobility. It is true, however, that the explosive growth of mobile devices requires enormous amounts of address space that cannot be provided with the IPv4 addresses.
Myth #7: Residential IPv6 is less secure because it does not require NAT
Reality: Ignorance.
Some engineers think that the NAT commonly used in residential CPE devices provides extra security owing to obfuscation of actual IP addresses of the hosts behind the CPE device. Enterprise-grade NAT implementations (available, for example, in Cisco IOS) provide security somewhat equivalent to a stateful packet inspection, but consumer-grade NAT available in most CPE devices does not.
Scanning the IPv6 address space looking for vulnerable hosts (a common hacker pastime) is totally useless in the IPv6 networking world. Using the current best practices, each consumer will get the equivalent of billions of today's entire IPv4 address space. Even if your workstation sits behind an unprotected CPE, finding it from afar would be quite a feat.
Furthermore, every modern operating system contains basic firewall capabilities (for example, the ability to block unwanted incoming sessions) that to some degree augment the functionality provided by CPE devices.
Last but not least, if residential security becomes an issue, the market will force even the low-cost CPE vendors to implement some basic filters to protect the end users.
Configure DNS lookups from the terminal - Mac OS X Hints
set State:/Network/Service/PRIMARY_SERVICE_ID/DNS
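That set command is the last step of an scutil session; a minimal sketch of the whole exchange (the DNS server address is illustrative, and PRIMARY_SERVICE_ID must be replaced with the PrimaryService ID shown by the first command):

sudo scutil
> show State:/Network/Global/IPv4              # note the PrimaryService ID
> d.init                                       # start a fresh dictionary
> d.add ServerAddresses * 192.168.1.1          # DNS server(s) to use (illustrative)
> set State:/Network/Service/PRIMARY_SERVICE_ID/DNS
> quit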
Don't install the parental management stuff!
This page contains an archive of all entries posted to MSOF Private in the Personal note: work category. They are listed from oldest to newest.