: Convert dynamic websites into static and change all links to relative
NOTE: If you are doing translations, you only need to translate the .ini files.
no
Trond Johansen
trond#j-data.no
http://www.j-data.no
Vennligst send epost om forslag og/eller korrigeringer.
2007-05-22 14:29:15
A1 Sitemap Generator
1.4.8
Antall {dynCount}
Vekt {dynCount}
Bearbeider data (kan ta noe tid)
Lenke
Kilde
Direkte
Ukjent
Lenket fra {dynCount} sider
Lenker til {dynCount} sider
Brukt fra {dynCount} sider
Lokalisert {dynCount}
Beskrivelse
Responskode
Type
Referert fra
Omdirigert fra
Frase
Websted
Vekt
Antall
Antall
interne stoppord
Side
Sted
Noen søkemotorer har retningslinjer og tiltak mot posisjonssjekking.<NL><NL>Husk at det er mulig å redigere og legge til nye søkemotorer for sjekking.<NL>Det er også mulig å konfigurere hvordan det skal søkes, for eksempel ventetid mellom forespørsler osv.<NL>Dette programmet fremtvinger noe ventetid mellom forespørsler for å forhindre "overbelastning" av søkemotorer.<NL><NL>Alle søkemotorer blir sjekket samtidig. Det betyr at "ventetid" har liten effekt på total hastighet.<NL><NL>Posisjonssjekk nå?
Noen tjenester har retningslinjer og tiltak mot ikke-manuell bruk.<NL><NL>Husk at det er mulig å redigere og legge til nye tjenester for sjekking.<NL>Det er også mulig å konfigurere bruk, for eksempel ventetid mellom forespørsler osv.<NL>Dette programmet fremtvinger noe ventetid mellom forespørsler for å forhindre "overbelastning" av tjenester.<NL><NL>Alle tjenester blir sjekket samtidig. Dette betyr at "ventetid" har liten effekt på total hastighet.<NL><NL>Motta forslag nå?
Du har valgt å bruke "eksterne verktøy" under scanning.<NL><NL>Vennligst vær klar over at noen av disse kan ha retningslinjer og tiltak mot ikke-manuell bruk.<NL>Dette programmet fremtvinger noen begrensninger på bruk (antall forbindelser osv.) for å forhindre "overbelastning".<NL><NL>Fortsette?
Denne fila har fått navn etter en annen konvensjon enn forventet.<NL><NL>Normalt mønster for navn er "{dynDirFilePathName}".<NL><NL>Bruke denne fila?
Du må gjennomføre en websted-scan først.<NL><NL>Når du lagrer og laster prosjekter, inkluderer dette også alle data relatert til og hentet fra scan av websteder.
Lagringskatalogen for sitemap-filstien "{dynDirFilePathName}" eksisterer ikke.<NL><NL>Forsøke å opprette katalogen nå?
Kunne ikke lage "{dynDirFilePathName}".<NL><NL>Dette kan skyldes manglende eller utilstrekkelige skrivetillatelser.
Det virker som om valgt "rot-sti" ikke er listet i "rot-sti-aliaser"-lista.<NL><NL>Fjerne gamle "alias"-verdier?
Lenker
scannet / listet
Sitemap
Intern
Ekstern
Nytt prosjekt
Åpne prosjekt...
Åpne prosjekt
Ctrl+O
Lagre prosjekt
Lagre prosjekt
Ctrl+S
Lagre prosjekt som...
Oppdater fil
Oppdater fil
Skriveroppsett...
Skriv ut...
Skriv ut
Ctrl+P
Avslutt
Eksporter som fil...
Eksporter data i valgt kontroll til en fil
Importer fra fil...
Importer webstedsdata fra fil
Klipp ut
Klipp ut valgt tekst
Ctrl+X
Kopier
Kopier valgt tekst
Ctrl+C
Lim inn
Lim inn tekst fra utklippstavle
Ctrl+V
Slett
Slett valgt tekst
Ctrl+Del
Velg alt
Velg all tekst i aktiv kontroll
Ctrl+A
Angre
Angre siste forandring
Ctrl+Z
Søk...
Finn tekst
Ctrl+F
Finn neste
Finn neste
F3
Erstatt tekst...
Erstatt tekst
Ctrl+R
Slå av/på tekstbryting
Formater og fjern mellomrom
Fjern kommentarer
Legg til rad
Legg til rad etter valgt
Ctrl+Alt+R
Sett inn rad
Sett inn rad før valgt
Ctrl+Alt+I
Legg til under
Legg til under i valgt
Ctrl+Alt+C
Slett rad
Slett valgt rad
Ctrl+Alt+E
Flytt rad opp
Flytt valgt rad opp
Ctrl+Alt+U
Flytt rad ned
Flytt valgt rad ned
Ctrl+Alt+D
Sorter
Sorter
Ekspander alle
Fold sammen alle
Åpne data etter scan av websted
Alltid åpne genererte sitemaps
Lagre analysedata kun som XML filer
Dersom du ønsker å bruke noen av søskenprogrammene med dette prosjektet, bør du la dette valget stå umarkert.
Hurtig lagring/lasting av analysedata filer
Foreslått dersom du ønsker å vise/redigere XML filer
Prosjekt lagring/lasting inkluderer data fra websteds analyser
Brukbar dersom du senere ønsker å laste prosjekt og se data hentet via kravling av websted
Begrenser lagring/lasting av webstedsanalysedata til "sitemap"
Brukbar dersom du kun er interessert i filer innen "sitemap"
Forrige
Viser forrige side
Shift+Ctrl+Z
Neste
Vis neste side
Shift+Ctrl+X
Oppdater
Bortsett fra ved redigering av kilde, vil dette slette mellomlager og oppdatere innholdet
F5
Navigasjonsviser
Assosierer "forrige, neste og oppdater" med innebygde kontroller (for eksempel Internet Explorer-vindu)
Ctrl+Alt+N
Funnede elementer
Svarkode
Svartid
Nedlastingstid
Filstørrelse
Filscore
Filscore skalert
Filvalideringsfeil
Online verktøy
Kan være et godt supplement til innebygde SEO verktøy
Ctrl+Alt+T
Internet Explorer
Opera
FireFox
LesMeg...
Viser LesMeg fila
Program logg
Viser program loggen
Hjelp...
Viser vedlagt hjelp
F1
Tips...
Vis tips
Kjøp nå...
Skriv koden...
Send tilbakemelding
Besøk websted
Om...
A1 Sitemap Generator
Hjemmeside for sitemap generator verktøyet
A1 Website Analyzer
Hjemmeside for website analyzer verktøyet
A1 Keyword Research
Hjemmeside for keyword research verktøyet
A1 Website Download
Hjemmeside for website download verktøyet
Struktur
Stiler
Gjenopprett standarder
Produkt hjelpefil (disk)
http://www.example.com
http://microsys.localhost
Template sitemap : HTML
Template sitemap : XML / XSL
Template sitemap : PHP
Google sitemap : XML
Gjenopprett standarder
example.com
&Fil
&Rediger
&Tabell
&Vis
&Valg
&Hjelp
Gjenåpne prosjekt
Scan websted
Analyser websted
Undersøk nøkkelord
Vis websted
Lag sitemap
Last opp sitemap
Prosjekt info
tsDevCtrls
Oppdater
Hold nedtrykt når du ønsker å "oppdatere" webstedets sidedata i stedet for å foreta en komplett scan
Fortsett
Hold nedtrykt når du ønsker å "fortsette" en websted-crawl i stedet for å gjennomføre en komplett scan
Start scan
Avbryt scan
Hurtiginnstillinger...
Vis varierte konfigurasjonseksempler (kan øke hastigheten når du lager nye prosjekt)
Bane
Konverterings valg
Søkemotor
Søkemotor valg
Søkemotor identifisering
Søkemotor filter
Utgående filter
Data samling
Eksterne verktøy
Konverter lenker
Konverter til relative lenker for å lese filer offline på lokal disk
Konverter til relative lenker for opplasting til en HTTP webserver / websted
Ingen konvertering
Automatisk påvisning er basert på type rot-bane
Standard innholds type
Dersom søkemotoren oppdager en ukjent lenke / referanse "innhold" vil den gå tilbake til standard
HTTP (internett og localhost)
Lokal disk / UNC
Auto påvis
Auto påvisning er basert på type rot-bane
HTTP proxy innstillinger
I de fleste tilfeller, kan du ignorere proxy innstillinger.
DNS navn / IP adresse
Port
Brukernavn
Passord
Antall 1/1000 sekunder å vente før "forbindelse" går ut på tid
Antall 1/1000 sekunder å vente før "lese" går ut på tid
Antall 1/1000 sekunder mellom en feilet forbindelse og et nytt forsøk
Antall forsøk på forbindelse til en ressurs før oppgivelse
Lagre omdirigeringer, lenker fra og til alle sider, statistikker og andre data
Brukes for å vise hvor filer er lenket, omdirigert osv. fra. Disse dataene brukes også til å forbedre ulike kalkulasjoner
Lagre funnede eksterne lenker
Det kan av og til være nyttig å vise funnede "eksterne" lenker
Lagre titler for alle sider (krever noe minne)
Rot-bane for webstedskatalogen (påkrevd valg for scanning av websted)
I mange tilfeller vil dette være det eneste feltet du trenger å fylle ut
Følgende valg
Sesjonsvariabler i lenker
Ta i betraktning at intern filbane er sensitiv ved valg av store og små bokstaver
Dersom du vet at verten kjører Windows, kan det hende du vil at dette skal være umarkert
Søkemotor-feilsider (som f.eks. responskode 404)
Dette kan være nyttig i noen sjeldne tilfeller, typisk med innholdsstyringssystemer og liknende
Verifiser eksistens av eksterne lenker
Verifisering av eksterne lenker kan gjøre scanneprosessen av websted tregere dersom det er mange døde lenker
Tillat omdirigeringer
Med dette valgt, kan du også vise alle omdirigeringer som ble oppdaget via websted scanning.
Tillat informasjonskapsler
Maksimum simultane forbindelser
Flere simultane forbindelser er ikke alltid raskere. Hastigheten på forbindelsen mellom deg og serveren er viktig.
Avanserte innstillinger
Dersom resultatet fra websted-scanning er "rart" (pga. webserverforbindelse og -last), prøv å øke tidsavbruddsverdiene
Tracking and storage of extended website data (uncheck for large sites)
Saving extended website data increases memory usage and can hurt crawler performance
Usage of external tools
Logging of progress
User agent ID
Some websites may return different content depending on crawler / user agent.
Login (usually username / password)
Supports "basic authentication" and "POST" forms. Remember to allow cookies.
Send "basic authentication" headers
"POST" method will automatically be used if login path is filled
When "unpressed" the maximum can exceed the recommended range. Use this option with caution! (Registered users only)
Username
Password
User field (post)
Pass field (post)
Hidden fields / values (post)
Login path (post)
Follow modes
Store email links in memory
If checked, you can view, but not save to disk, email addresses found
Create log file of website scans
Placed in program user data directory "logs - misc"
Store content of all pages in memory
Useful if you plan to view all files and pages scanned and want loading to be faster
Store response headers text for all pages
Validate HTML using W3C validator (select a number above 0 to enable)
Set the maximum amount of simultaneous connections to use with this tool
Search all link tag types
Extend search to include: <img src="">, <script src="">, <link href=""> etc.
Try search for links in Javascript and CSS (simple)
Extend search to: <a onclick="window.open()"> etc.
Try search for links in Javascript (extended)
This will attempt to find and guess links in all script sections and functions
Download "robots.txt"
Always download "robots.txt" to identify as a crawler/robot
Obey "robots.txt" if found
This file is often used by webmasters to "guide" crawlers/robots
Obey meta tag "robots" noindex
Obey meta tag "robots" nofollow
Ignore "dynamic circular" generated internal links
Ignore links such as "http://example.com/?paramA=&amp;paramA" etc. (e.g. dynamic pages using own page data to build new pages)
Max characters in internal links
Cutout session variable in internal links
Session variables are sometimes inserted into links: "example.jsp;jsessionid=xxx" or "example.php?PHPSESSID=xxx"
If left empty, crawler will try names such as "jsessionid" and "PHPSESSID"
Cutout "?" (get parameters) in internal links
Removes "?" in links and thereby also determines if "page.htm?A=1" and "page.htm?A=2" are considered to be "page.htm"
Cutout "#" (address within page) in internal links
Determines if "page.htm#A" and "page.htm#B" are considered to be the same page
Cutout "#" (address within page) in external links
Determines if "page.htm#A" and "page.htm#B" are considered to be the same page
Consider <iframe> tags for links
<iframe> is always considered "source". However, treating it as a "link" as well can sometimes be useful
Root path aliases
Used to cover http/https/www variations and addresses mirroring / pointing to the same content
Limit final sitemap output to certain directories
Leave empty to use website root path.
Beyond website root path, initiate scanning from paths
Useful in cases where the site is not crosslinked, or if "root" directory is different from e.g. "index.html"
Save files crawled to disk directory path
Scan pages with file extension
Directories are always scanned
Case sensitive comparisons
Determines if all filters are case sensitive (e.g. if ".extension" also matches ".EXTENSION")
List files with file extension in output
Leave empty to include all
Case sensitive comparisons
Determines if all filters are case sensitive (e.g. if ".extension" also matches ".EXTENSION")
Allow and crawl internal links that match certain paths
Limits crawling to paths within allow list filter (use relative paths such as "dir/"). Leave empty to use website root path
Disallow and ignore links that match certain strings or paths
For simple matches, write e.g. "?". To specify a path relative to root: ":myfolder/" or ":myfolder/*" (ignores all subpaths)
Webmaster crawler filters
Crawler "traps" detection
Default storage file name
If URLs have file names too long to be saved to disk, they are named "<default>0, <default>1, <default>2" etc.
Website links structure
Change active selection
Items
Found items
R.Code
HTTP response code
Path
Path part
R.Time
Response time (milliseconds)
D.Time
Download time (milliseconds)
F.Size
File size (KB)
F.Score
File score (weighted link values)
F.Scaled
File score scaled (0-10)
V.Errors
Errors encountered during validation
Address
User agent
You can type an address here and use the "Refresh" button (or hit the "Enter" key)
Leave empty to use default. Otherwise enter another user agent ID here (some websites may return different content)
Choose from quick list
Edit quick list
Add to quick list
Sitemap
Internal
External
Emails
Collected data
View file
View source
W3C validate HTML
W3C validate CSS
Page data
Linked to by
Used as source by
Redirected to by
Directory summary
Links found
Response headers
Title
Save
Response code
Test
Importance score scaled
Incoming links weighted and transformed to a logarithm based 0-10 scale
Fetch
Google PageRank
Fetch
Estimated change frequency
Calculation based on "importance score" and some HTTP headers
Fetch
Last modified
This checks "file last changed" for local files and server response header "last-modified" for HTTP
Test
Save
Sub address
Save
Part address
Full address
Save
|If the file resides on local disk, you can use menu "File - Update File" to save changes
No page to validate
No page to validate
Select a page or phrase to activate embedded browser
Select which file, containing search engine retrieval details, you wish to use (only one)
Select which files, containing position engine retrieval details, you wish to use
Select which files, containing suggestion engine retrieval details, you wish to use
Type the address of one or more websites. <SelectedSite> and <SelectedPage> are only relevant when "Website scan" has been used
Type the phrases you want to position check selected sites against
Search engines
Sites to compare
Phrases to check
Either select a page in the "website tree view" to the left or enter one in the "Address" textbox underneath it
Phrase
Count 0
%
Weight 0
%
When pressed, keywords page data will no longer be updated automatically
Stop words
Text weight in elements
The values indicate relative importance. 0 means no text is extracted from it.
Title text <title></title>
Header text <hx></hx>
Difference between "normal" and "header" weight is spread out so <h1> gets full value and <h7> only 1/7
Anchor text <a></a>
Normal text
Image alternative text
Meta description / keywords
Google (data centers and options)
Not used by the website crawler. Only used in other places when requested
Decide hosts to retrieve data from, e.g. "http://www.google.com/" (if multiple, retrieved data will sometimes be averaged)
Enable usage of PageRank checking functionality
How many words in phrases
Limit the keywords list to a fixed number of characters. "*" = all. "#" = counts characters and allows editing.
Limit the text size to a fixed number of characters. "#" = counts characters and allows editing.
"*" uses the phrases in "configuration". "#" and ranges use sentences from "Page [keywords]". "#" is selected by default.
Besides being ignored in totals, stop words also "end current phrase" when encountered by scanner (use "Refresh" to update)
Input comes from listed "Phrases / words".
Online tools
Add or edit files with stop words
Navigate to selected address
Should item be shown in "quick" tabs
Add or edit files with "online tools" configuration
Write or select an address
Page [keywords]
Page [input]
Keywords [page]
Positions [analysis]
Positions [check]
Positions [history]
Keywords [explode]
Keywords [suggest]
Retrieve all data for phrases that match the filters (if the phrase is empty, the dropdown box underneath will be filled with all available)
Separate each tumbler with an empty line <NL>, e.g. Tumbler1Word1<NL>Tumbler1Word2<NL> <NL> Tumbler2Word1<NL>Tumbler2Word2<NL>
Enter one or more phrases. Use the tools underneath to quickly explode keyword lists
Uses all phrases from selected word count in "Phrases / words"
Input text here for keyword density check (used when no page / URL has been selected)
Engine and depth to check
Positions to check
Cancel position check
Start position check
Save results to position check history data (used for graphs etc.)
Keep "pressed" to show hints and warnings before fetching results
Build now
Generate a sitemap of the selected type (e.g. Google XML sitemap)
Build all
Build all kinds of sitemaps supported
Quick presets...
View various configuration examples (can speed up creating new projects)
Core options
Path options
Document options
Content options
Builder options
Template options
Template code
Sitemaps protocol options
Sitemap builder mode (the kind of sitemap to generate)
Decide which kind of sitemap you want to generate
XML sitemap file output path (Google created XML sitemaps protocol)
RSS sitemap file output path
Text sitemap file output path (one url per line)
Template sitemap file output path (usually HTML or custom format)
ASP.net Web.sitemap file output path (for .Net navigation controls)
Items as linked descriptions
Prefer <title></title>
Prefer raw paths
Prefer beautified paths
Auto detection is based on root path type
Beautified paths
Convert separators to spaces
Upcase first letter in first word
Upcase first letter in all follow words
Directories as items in own lists
Ignore
Item (normal)
Directories as headlines
Ignore
Prefer path
Prefer path (linked)
Prefer path + title (linked)
Set path options used in sitemap
Links use full address
Override and convert slashes used in links
Layout
Columns count
With a value above 0, links will be spread among the columns
Set path root used in sitemap
Can be useful if you e.g. scanned "http://localhost" but the sitemap is for "http://www.example.com"
Add relative path to "header root link" in template sitemaps
Have e.g. "index.html" (http://example.com/index.html) instead of "" (http://example.com/) as the "root header link" in sitemap
Character set and type
Always save as UTF-8
If you need to support "non-western" international users
Item response codes allowed in generated sitemaps
Control
Disable "template code" for empty directories
Useful in some cases such as avoiding <ul></ul> (which fails W3C HTML validator)
Enable root headline "template code"
If checked, the "Code : Root headline ..." will, together with the "root directory", be inserted underneath "Code : Header"
Convert from characters to entities in urls and titles
"&" to "&amp;", "<" to "&lt;", ">" to "&gt;" etc.
Using calculated values
"Scan website" calculates various values for all pages. You can view and edit these in "Website data"
Prevent "extreme" <priority> and <changefreq> values
Influences the conversions done from calculated values into Google XML sitemap ranges
Override calculated values
Override Priority
Instead of automatic calculation, give all pages same priority (setting it to a minus value "-" leaves out the tag)
Priority is also used as fallback for setting "change frequency"
Override ChangeFreq
Instead of automatic calculation, give all pages same change frequency (setting it to a star "*" leaves out the tag)
Override LastMod
Instead of automatic calculation, give all pages same last modification value (setting it to Dec 30th 1899 leaves out the tag)
Add XML necessary for validation of generated sitemap file(s)
Google sitemap file options
Apply Gzip to a copy of the generated sitemap file(s)
This "gzip" copy will have name "sitemap.xml.gz" if the original name is "sitemap.xml"
Maximum number of URLs in each sitemap file
Code : Header
Code : Footer
Code : Start of headline before start of directory
Code : End of headline before start of directory
Code : Start of directory
Code : End of directory
Code : Before headline / directory combination
Code : After headline / directory combination
Code : Start of item link address start
Code : Start of item link address end
Code : End of item link title start
Code : End of item link title end
Code : Start of headline link address start
Code : Start of headline link address end
Code : End of headline link title start
Code : End of headline link title end
Code : Column start
Code : Column end
Code : Root headline start
Code : Root headline end
Upload now
Upload all
Quick presets...
View various configuration examples (can speed up creating new projects)
FTP upload
Ping notify
Host and port number
Upload path
Connection mode
Transfer mode
Username
Password
Addresses to ping
Some services support notifications simply by requesting a specific address
File information
Path:
Saved with version:
Date information
Project created:
Project last modified:
Dynamic help
Navigate embedded browser (Internet Explorer) to selected address
Open the page in IE (may give a better viewing experience)
Open the page in Firefox (may give a better viewing experience)
Open the page in Opera (may give a better viewing experience)
Tumble to input
Clear tumblers
Add to input
Replace input
Clear input
Add to output
Replace output
Clear output
Word order
Cover word permutations such as "tools power" (instead of "power tools")
Missing space
Cover typo errors such as "powertools" (instead of "power tools")
Missing letter
Cover typo errors such as "someting" (missing "h")
Switched letter
Cover typo errors such as "somehting" ("h" and "t")
Tidy
In output: Trim for superfluous spaces
No repetition
In output: Avoid immediate repeating words in same phrase (e.g. "deluxe deluxe tools")
No duplicates
In output: Remove duplicated phrases (e.g. if two "power tools")
Suggest related
Uses input to suggest related phrases
Cancel suggest
Show information
Keep "pressed" to show hints and warnings before fetching results
Filter
In output: Accept phrases that only contain #32 (space) and characters in filter text. If no text, character/word count is used
Prepend
Copy and add all phrases with text inserted at beginning
Append
Copy and add all phrases with text added to the end
Tools
Input
Output
Tumblers
Input
Output
Retrieve all data for phrases that match the filters (if no phrase is selected, the dropdown box will fetch all available)
Include data from date
Include data till date
Select search engines
Select websites
Set whether to show. Checked = show if data is available. Green = has available data. Red = has no data for active phrase.
Switch between e.g. normal and logarithmic scale
Switch between e.g. show and hide legend
Switch between e.g. show and hide marks
Save as image
Get
Website Download now : Download and store complete websites for later viewing. Perfect for researchers, travellers and dial-up users.
- Website download can be automated through command line.
- Can handle and find links in CSS as well as most Javascript files.
- Fast website downloader with options for connections, timeouts, crawler filters etc.
This language file is part of Website Download / Website Analyzer / Keyword Research / Sitemap Generator. All rights reserved. See legal.