A1 Keyword Research

: Website tools for search engine optimization (SEO) and keyword marketing, including pay-per-click (PPC)




NOTE: If you are doing translations, you only need to translate the .ini files.



ser-rs
Srpski (RS)
Goran Nešić
cyrusxxxx#nadlanu.com
http://www.readyonpcshop.com
Hvala Thomas-u Schulz-u na ovom predivnom programu.
Sugestije / predloge za srpski prevod šaljite na gore navedenu mail adresu.
2009-04-01 15:52:48
A1 Sitemap Generator
1.8.5


Izračunaj {dynCount}
Veličina {dynCount}
{dynEngine} # povezano
{dynEngine} # pretraženo
{dynEngine} # pogodak
Obrada podataka (može potrajati)
Veza
Izvor
direktno
Nepoznato
povezano od {dynCount} stranica
linkovi ka {dynCount} stranicama
Iskorišćeno od {dynCount} stranica
Locirano {dynCount}
Opis
Kod odgovora
Vrsta
Pozvan od
preusmeren od
Lociran {dynCount}
Fraza
Vebsajt
Veličina
Broj
Broj
nema stop reči
Strana
Sajt
ugradjeno
Korisnik napravio
Some search engines have policies and measures against position checking.<NL><NL>Remember that it is possible to edit and add new search engines to check.<NL>It is also possible to configure how to search, e.g. idle time between requests etc.<NL>This program enforces some idle time between requests to prevent "overloading" search engines.<NL><NL>All search engines are checked concurrently. This means that "idle" time has little effect on overall speed.<NL><NL>Position check now?
Some search engines have policies and measures against position checking.<NL><NL>Remember that it is possible to edit and add new search engines you can check against.<NL>It is also possible to configure how to search, e.g. idle time between requests etc.<NL>This program enforces some idle time between requests to prevent "overloading" search engines.<NL><NL>Analyze top positions on selected search engine now?
Some services have policies and measures against non-manual usage.<NL><NL>Remember that it is possible to edit and add new services to check.<NL>It is also possible to configure usage, e.g. idle time between requests etc.<NL>This program enforces some idle time between requests to prevent "overloading" services.<NL><NL>All services are checked concurrently. This means that "idle" time has little effect on overall speed.<NL><NL>Retrieve suggestions now?
Ovaj fajl je imenovan koristeći drugačiji šablon.<NL><NL>Normalan naziv je u formatu "{dynDirFilePathName}".<NL><NL>Koristi ovaj fajl?
Treba da izaberete ili unesete adresu veb strane u "Aktivna adresa".
Treba da unesete ili izaberete ključnu frazu u "Aktivna fraza".
Treba da unesete jednu ili više adresa u "Adrese za naći".
Treba da unesete jednu ili više fraza u "Fraze za proveriti".
Morate prvo da skenirate web sajt.<NL><NL>Kada čuvate ili učitavate projekat, ovo takođe uključuje i sve podatke povezane i skinute sa njega.
Izlazna adresa mape sajta "{dynDirFilePathName}" ne postoji.<NL><NL>Pokušajte da napravite novi direktorijum?
Nije mogao biti napravljen "{dynDirFilePathName}".<NL><NL>Ovo je izazvano usled manjka administratorskih dozvola ili prava.
Koren direktorijuma "{dynDirFilePathName}" mora da se završi kosom crtom (slash). Koristi unetu adresu?<NL><NL>Kliknite na "DA" da dodate kosu crtu (slash) na koren direktorijuma koji ste uneli.<NL>Kliknite na "NE" da dodate stazu kao "početna strana za skeniranje" i da posle odsečete ostale direktorijume oko nje.
Izgleda da izabrana "staza korena" nije izlistana u listi "koren pokazivačkih fajlova".<NL><NL>Obriši stare vrednosti?
Analizirano / Nadjeno
URLs
Mapa Sajta
Interna
Eksterna


Novi Projekat
Otvori projekat...
Otvori projekat
Ctrl+O
Sačuvaj projekat
Sačuvaj projekat
Ctrl+S
Sačuvaj projekat kao...
Ažuriraj fajl
Ažuriraj fajl
Podešavanja štampača...
Štampaj
Štampaj
Ctrl+P
Izlaz
Eksportuj izabrane podatke kao...
Export data in selected control into a file
Importuj URL-ove iz fajla...
Importuj web sajt iz fajla
Odseci
Odseci obeleženi tekst
Ctrl+X
Kopiraj
Kopiraj izabrani tekst
Ctrl+C
Nalepi
Nalepi tekst
Ctrl+V
Obriši
Obriši izabrani tekst
Ctrl+Del
Izaberi sve
Izaberi sav tekst
Ctrl+A
Povrati
Povrati zadnju promenu
Ctrl+Z
Traži...
Pronadji tekst
Ctrl+F
Pronadji sledeće
Pronadji sledeće
F3
Zameni sledeće...
Zameni tekst
Ctrl+R
Izmeni word wrap
Dodaj stavku
Dodaj stavku posle selektovanja
Ctrl+Alt+R
Dodaj red
Dodaj red pre selektovanja
Ctrl+Alt+I
Add child
Add child in selected
Ctrl+Alt+C
Izbriši stavku
Izbriši selektovanu stavku
Ctrl+Alt+E
Pomeri stavku na gore
Pomeri izabranu stavku na gore
Ctrl+Alt+U
Pomeri stavku na dole
Pomeri izabranu stavku na dole
Ctrl+Alt+D
Složi
Složi
Raširi prikaz
Umanji prikaz
Prioritetno
Pogledaj prioritetnu stranu
Shift+Ctrl+Z
Sledeće
pogledaj sledeću stranu
Shift+Ctrl+X
Osveži
Osim kada editujete izvor (source), ovo čisti keš i osvežava sadržaj
F5
Navodi pregledač
Poveži "Prioritetni, Sledeće i Osveži" sa usadjenom kontrolom (npr. prozor IE)
Ctrl+Alt+N
Nadjene stavke
Kod odgovora
Vreme Odgovora
Vreme Preuzimanja
Veličina fajla
Tip MIME
Matrica slovnih karaktera fajla
Fajl modifikovan
Povezano od
Interni linkovi
Eksterni linkovi
Kod važnosti izračunat
Skala koda važnosti
Greške u validaciji HTML-a
Greške u validaciji CSS-a
Naziv strane
Opis strane
Online alati
Može biti dobar dodatak u ugradjenim SEO alatima
Ctrl+Alt+T
Internet Explorer
Opera
Firefox
Sve_Videti_uModuSaStrane
Promeni da li se pronađeni web sajt prikazuje u "stablo" ili "lista" formatu
Formatiraj i obriši prazan prostor
Obriši HTML komentare
Obeleži sintakse u fajlu
Otvori podatke posle skeniranja web sajta
Uvek otvori mapu sajta
Sačuvaj analizirane podatke samo kao XML
Ako želite da koristite neke slične programe na ovom projektu, neka ova opcija ostane nečekirana.
Brzo skladištenje analiziranih podataka
preporučeno osim ako ne želite da pogledate/editujete XML fajlove
Uključuje i podatke o analizi web sajta
Korisno ako želite da kasnije učitate projekat i vidite podatke učitane prilikom pretrage sajta (website crawl)
Ograniči analizu na URL-ove u "mapi sajta"
Korisno ako ste zainteresovani samo za podatke unutar "mape sajta"
Linkovi "umanji"
Ako je link ka stranici pronadjen višestruko, "link juice" postaje umanjen za svaki link
Link "noself"
Ako strana sadrži link ka samoj sebi, ti linkovi su ignorisani
Eksportuj CSV fajlove sa zaglavljima
Eksportuj CSV fajlove sa url-ovima
Pročitaj me...
Pogledaj pročitaj me fajl
Dnevnik programa
Pogledaj dnevnik programa
Pomoć...
Pogledaj omogućenu pomoć
F1
Preporuke...
Pogledaj preporuke
Nadogradnja...
Proveri za nadogradnju
Kupi...
Unesi kod...
Pošalji povratnu informaciju
Poseti web sajt
O programu...
A1 Sitemap Generator
Početna stranica za Sitemap Generator
A1 Website Analyzer
Početna stranica za website analyzer tool
A1 Keyword Research
Početna stranica za keyword research tool
A1 Website Download
Početna stranica za website download tool


Struktura
Stilovi
Povrati osnovna podešavanja
Google Video Mapa sajta
Pomoć o programu (disk)
http://www.primer.com
http://primer.localhost
HTML postavka mape sajta : HTML
HTML postavka mape sajta : XML / XSL
HTML postavka mape sajta : PHP
HTML postavka mape sajta : CSV
XML Sitemap : XML Protokol mape sajta (Google Sitemaps)
Povrati osnovna podešavanja
primer.com
&amp;Fajl
&amp;Edit
&amp;Tabela
&amp;Pogledaj
&amp;Alati
&amp;Opcije
&amp;Pomoć
Ponovo otvori projekat
Ponašanje programa
Sačuvaj/Učitaj projekat
URL algoritam važnosti
Uvoz/izvoz podataka


Skeniraj web sajt
Pogledaj web sajt
Analiziraj web sajt
Istraži fraze
Napravi mapu sajta
Napravi robots.txt
Pogledaj fajlove
Uploaduj fajlove
Pinguj mapu sajta
Info o projektu
tsDevCtrls
Recrawl
Uključite kada želite ponovo da skenirate ("recrawl") web strane (koristi već postojeće podatke od skeniranja ako ih ima)
Nastavi
Čekiraj kada želite da nastavite crawl web sajta (koristi već postojeće podatke od skeniranja ako ih ima)
Pokreni skeniranje
Zaustavi skeniranje
Brze postavke...
Pogledaj razne primere podešavanja (može ubrzati stvaranje novih projekata)
Staza
Stanje skeniranja
Crawler opcije
Crawler engine
Identifikacija Crawler-a
Download opcije
Webmaster filteri
Filteri analize
Filter liste
Prikupljanje podataka
Eksterni alati
Filteri preuzimanja
Početna staza fajla
Ako crawler naiđe na nepoznat link / referencu, "sadržaj" se vraća na osnovna podešavanja
HTTP (internet i localhost)
Local disk / UNC / Local Area Network
Auto detektovanje
Auto detektovanje je zasnovano na putanji korena direktorijuma
HTTP proxy podešavanja
U većini slučajeva možete ignorisati proxy podešavanja.
DNS Ime / IP adresa
Port
Korisničko ime
Šifra
Sačekati u 1/1000 sekundi pre nego što "konekcija" istekne
Sačekati u 1/1000 sekundi pre nego što "čitanje" istekne
Sačekati u 1/1000 sekundi izmedju propalog i novog pokušaja za uspostavljanje konekcije
Broj pokušaja za uspostavljanje veze pre odustajanja
Default to GET for page requests (instead of HEAD followed by GET)
U zavisnosti od web sajta i web servera može doći do razlike u performansama izmedju ova dva izbora
Podrazumevano koristi uporne konekcije
Uporne konekcije mogu biti prednost kod web servera koji ne trpe mnogo konektovanja i diskonektovanja
"Accept-Language" header za slanje (ako je prazno, ništa se ne šalje)
Skladišti preusmeravanja, linkove od i ka svim stranicama, statistiku itd.
Koristi se za gledanje gde su fajlovi linkovani i preusmereni od. Ovi podaci se takodje koriste za poboljšanje mnogih proračuna
Skladišti eksterne linkove
Ponekad je korisno za videti "eksterne" linkove
Skladišti nazive za sve stranice (koristi malo memorije)
Skladišti "meta" opise za sve stranice
Web sajt root direktorijum (traženo zbog opcije skeniranja)
U većini slučajeva ovo je jedino polje koje ćete morati da popunite
Prati podešavanja
Opcije i korisničke promenljive u linkovima
Promenljive sesije mogu biti i unutar url-ova: "demo.php;sid=xxx" ili "demo.php?PHPSESSID=xxx". Provere razlikuju velika i mala slova
Uzmi u obzir eksterne linkove (provere razlikuju velika i mala slova)
Ako znate da host koristi Windows, možda biste želeli da odčekirate ovo
Crawl strane sa greškama (response code 404)
Ovo može biti korisno u nekim retkim slučajevima, tipično za sisteme za upravljanje sadržajem i slično
Verifikuj postojanje eksternih linkova
Verifikovanje eksternih linkova može usporiti skeniranje web sajta ukoliko ima dosta mrtvih linkova
Dozvoli preusmeravanja
Sa ovom opcijom uključenom možete videti sva preusmeravanja koja su se desila tokom skeniranja web sajta
Dozvoli kolačiće
Dozvoli GZip/kompresiju zbog prenosa podataka
Maksimum uporednih konekcija
Više uporednih konekcija neće uvek biti brže. Konekcija izmedju vas i servera je vrlo bitna
Napredna Engine podešavanja
Skladištenje i pretraga eksternih podataka o web sajtu (odčekirati za velike sajtove)
Skladištenje eksternih podataka o web sajtu može naškoditi crawler-ovim performansama
Korišćenje eksternih alata
Dnevnik napretka
User agent ID
Neki web sajtovi mogu vratiti drugačiji sadržaj u zavisnosti od crawler-a / user agent-a.
U retkim slučajevima koristite drugačije agent ids.
Uloguj se (korisnik / šifra)
Podržava "osnovnu identifikaciju" i "posting" forme. Zapamtite da dozvolite kolačiće.
Šalji HTTP headere po svakom zahtevu (osnovna autentikacija)
Kada je "nepritisnuto", maksimum će preći preporučeni raspon. Koristite ovo sa oprezom! (Samo za registrovane korisnike)
Korisnik
Šifra
Korisničko polje (post)
Šifra (post)
Više "key=value" parametara (post)
Login staza (post)
Prati modove
Skladišti e-mail linkove u memoriji
Kada uključeno možete videti ali ne i sačuvati pronadjene e-mail adrese
Napravi dnevnik skeniranja (usporava web sajt crawler-a)
Smešteno u programski direktorijum "logs - misc"
Skladišti sadržaj svih stranica u memoriji (zahteva dosta memorije)
Korisno ako želite da vidite sve fajlove i stranice, takodje učitavanje će ići brže
Skladišti response headere za sve stranice
Validiraj HTML koristeći W3C validator (izaberi 0 ako želite da aktivirate ovo)
Izaberite maksimalni broj uporednih konekcija za upotrebu sa ovim alatom
Validiraj CSS koristeći W3C validator (izaberi 0 ako želite da aktivirate ovo)
Izaberite maksimalni broj uporednih konekcija za upotrebu sa ovim alatom
Pretraži sve tag linkove
Proširi pretragu i uključi: &lt;img src=""&gt;, &lt;script src=""&gt;, &lt;link href=""&gt; itd.
Pretraži sve &lt;form&gt; i povezane tagove
Proširi pretragu i uključi: &lt;form&gt;, &lt;input&gt;, &lt;select&gt; itd.
Uvek skeniraj direktorijume koji sadrže povezane URL-ove
Ovo podešavanje osigurava da su direktorijumi uvek skenirani, iako u sebi ne sadrže direktne linkove
Automatski popravi URL-ove sa osnovnim definisanim portovima
Primer: Ako skenirate "http://primer.com/", websajt crawler će takođe prihvatiti "http://primer.com:80/" kao unutrašnji
Osiguraj da je URL "staza" procentualno enkodirana (percent encoded)
Osiguraj da je URL "filter" procentualno enkodiran (percent encoded)
Pokušaj pretragu Javascript-a i CSS-a (prosto)
Proširi pretragu na: &lt;a onclick="window.open()"&gt; itd.
Pokušaj pretragu linkova u Javascript-u (prošireno)
Ovo će pokušati pronaći sve linkove u script-ama i funkcijama
Primeni "webmaster" i "izlistaj" filtere posle završetka skeniranja web sajta
Uklanja URL-ove blokirane preko "robots.txt", "noindex" i programski konfigurisanih "izlaznih filtera"
Preuzmi "robots.txt"
Uvek preuzmi "robots.txt" da se identifikuje kao crawler/robot
Potčini se "robots.txt" ukoliko je fajl nađen
Ovaj fajl je obično korišćen da se "navode" crawlers/robots
Potčini se "meta" tag-ovima "robots" noindex
Potčini se "meta" tag-ovima "robots" nofollow
Potčini se "a" tag-ovima "rel" nofollow
Potčini se "link" tag-ovima "rel" canonical
Ignoriši "dynamic circular" generisane interne linkove
Ignore links such as "http://example.com/?paramA=&amp;paramA" etc. (e.g. dynamic pages using own page data to build new pages)
Max characters in internal links
Cutout session variables in internal links
Cutout "?" (GET parameters) in internal links
Removes "?" in links and thereby also determines if "page.htm?A=1" and "page.htm?A=2" are considered to be "page.htm"
Cutout "#" (address within page) in internal links
Determines if "page.htm#A" and "page.htm#B" are considered to be the same page
Cutout "#" (address within page) in external links
Determines if "page.htm#A" and "page.htm#B" are considered to be the same page
Correct "\" when used instead of "/" in internal links (only applied in HTTP scan mode)
Corrects e.g. "folder\sub" to "folder/sub" in all links (only applied in HTTP scan mode)
Correct "//" when used instead of "/" in internal links
Corrects e.g. "folder//sub" to "folder/sub" in all links
Consider &lt;iframe&gt; tags for links
&lt;iframe&gt; is always considered "source". However, sometimes also as "link" can be useful
Root path aliases
Used to cover http/https/www variations and addresses mirroring / pointing to the same content
Limit list of internal URLs to those within a "relative path" in list
Limit list to paths within allow list (use relative paths such as "dir/"). Leave empty to use website root path
Exclude list internal URLs that match item in "relative path / string / regex" list
Text string matches: "mypics". Path relative to root: ":mypics/", subpaths only: ":mypics/*", regex search: "::mypics[0-9]*/"
Beyond website root path, initiate scanning from paths
Useful in cases where the site is not crosslinked, or if "root" directory is different from e.g. "index.html"
Scan data
Scan state :
Time used :
Internal "sitemap" URLs
Listed found :
Listed deduced :
Analyzed content :
Analyzed references :
External URLs
Listed found :
Internal "outside" URLs
Listed found :
Listed deduced :
Analyzed content :
Save files crawled to disk directory path
Limit analysis of internal URLs to those with "MIME content type" in list
Analysis will also be done if no MIME type returned
Limit analysis of internal URLs to those with "file extension" in list
Directories are always analyzed
Case sensitive comparisons
Determines if all filters are case sensitive (e.g. if ".extension" also matches ".EXTENSION")
Limit list of internal URLs to those with "file extension" in list
Leave empty to include all. Directories are always included
Case sensitive comparisons
Determines if all filters are case sensitive (e.g. if ".extension" also matches ".EXTENSION")
Limit analysis of internal URLs to those within a "relative path" in list
Limit analysis to paths within allow list. Use relative paths, e.g. "dir/" and "dir/file.htm". Leave empty to use website root path
Limit analysis of internal URLs to those below depth level
Depth level: "-1" = no limits. "0" = root domain/directory. "1", "2", "3" ... = all paths below chosen depth level.
Website directory path depth level
General settings for options in "analysis filters"
Exclude analysis internal URLs that match item in "relative path / string / regex" list
Text string matches: "mypics". Path relative to root: ":mypics/", subpaths only: ":mypics/*", regex search: "::mypics[0-9]*/"
Webmaster crawler filters
Website "crawler traps" detection
Website links structure
Path
Path part
Items
Found items
R.Code
HTTP response code
R.Time
Response time (milliseconds)
D.Time
Download time (milliseconds)
Size
File size (KB)
MIME
MIME content type
Charset
Character set and encoding
Modified
Last modified date/time returned through HTTP header or meta tag
Linked
Incoming links found within website
L.Internal
Internal links on page
L.External
Outgoing external links on page
Importance
File score (calculated from weighing all links across entire website)
I.Scaled
Importance score scaled (0-10)
Desc
E.HTML
HTML validation errors
E.CSS
CSS validation errors
Title
You can type an address here and use the "Refresh" button (or hit the "Enter" key)
Leave empty to use default. Otherwise enter another user agent ID here (some websites may return different content)
Sitemap
Internal
External
Emails
Collected data
View file
View source
W3C validate HTML
W3C validate CSS
Page data
Links [internal]
Links [external]
Linked by
Uses [internal]
Uses [external]
Used by
Redirected from
Directory summary
Response headers
Title
Save
Response code
Test
Importance score scaled
Incoming links weighted and transformed to a logarithm based 0-10 scale
Fetch
Crawler state flags
Save
Content downloaded
Analysis required
Analysis started
Analysis finished
Analysis content done
Analysis references done
Detected "robots.txt"
This covers "list" filter and robots.txt
Detected "meta robots noindex"
Detected "meta robots nofollow"
Detected "link robots canonical"
Detected "do not list" filter
This covers "list" filters and "webmaster" filters
Detected "do not analyze" filter
This covers "analysis" filters and "webmaster" filters
Google PageRank
Fetch
Estimated change frequency
Calculation based on "importance score" and some HTTP headers
Fetch
Last modified
This checks "file last changed" for local files and server response header "last-modified" for HTTP
Test
Save
Sub address
Save
Part address
Full address
Save
Redirects to
|If the file resides on local disk, you can use menu "File - Update File" to save changes
No page to validate
No page to validate
Select a page or phrase to activate embedded browser
Google (data centers and options)
Not used by website crawler. Only used in other places when requested
Decide hosts to retrieve data from, e.g. "http://www.google.com/" (if multiple, retrieved data will sometimes be averaged)
Enable usage of PageRank checking functionality
Select which file, containing search engine retrieval details, you wish to use (only one)
Select which files, containing position engine retrieval details, you wish to use
Select which files, containing suggestion engine retrieval details, you wish to use
Type the address of one or more websites. &lt;SelectedSite&gt; and &lt;SelectedPage&gt; automatically add "Active address".
Type the phrases you want to position check selected sites against. &lt;SelectedPhrase&gt; automatically adds "Active keyword phrase".
Either select a page in the "website tree view" to the left or enter one in the "Address" textbox underneath it
Phrase
Count 0
%
Weight 0
%
Lock analysis results
When pressed, keywords page data will no longer be automatically updated when "Active address" changes
The values indicate relative importance. 0 means no text is extracted from it.
Title text &lt;title&gt;&lt;/title&gt;
Header text &lt;hx&gt;&lt;/hx&gt;
Header &lt;h1&gt; weighs most. If "normal text" is 1 and "header text" is 3: H1 = 1 + 6/6 * (3-1), H6 = 1 + 1/6 * (3-1)
Anchor text &lt;a&gt;&lt;/a&gt;
Normal text
Image alternative text
Tag attribute "title"
Meta description
Meta keywords
Split and show phrases with specific number of words. "*" shows all phrases with 1 to 5 words.
Limit the keywords list to a fixed number of characters. "*" = all. "#" = counts characters and allows editing.
Limit the text size to a fixed number of characters. "#" = counts characters and allows editing.
<#32>, (comma)
Insert a comma between all keywords
<#32>\s (space)
Insert a space between all keywords
<#32>\n (newline)
Insert a new line between all keywords
Extract keywords and analyze density
Raw text input
Keyword list output
Tools
Text weight in elements
Stop words filter
Besides being ignored in totals, stop words also "end current phrase" when encountered by scanner (use "Refresh" to update)
Input comes from listed "Phrases / words".
Online tools
Add or edit files with stop words
Navigate to selected address
Should item be shown in "quick" tabs
Add or edit files with "online tools" configuration
Write or select an address
Page [keywords]
Positions [analysis]
Positions [check]
Positions [history]
Keywords [explode]
Keywords [suggest]
Type or select a keyword or phrase
Combine keyword lists. Write each keyword phrase on a line. Separate all keyword lists with an empty line between
Enter one or more phrases. Use the tools underneath to quickly explode keyword lists
Input text here for keyword density check (used when no page / URL has been selected)
Analyze search results
Analyze search results
Stop analysis
Stop analysis
Show information
Keep "pressed" to show hints and warnings before fetching results
Tools
Engine and depth to check
top positions
Stop check
Stop position check
Position check
Start position check
Save to history
Save results to position check history data (used for graphs etc.)
Show information
Keep "pressed" to show hints and warnings before fetching results
Tools
Addresses to find
Phrases to check
Search engines
Save presets to textfile
Load presets from textfile
Save presets to textfile
Load presets from textfile
Save presets to textfile
Load presets from textfile
Combine to input
Combine keyword lists (each separated with an empty line) into output
Clear tumblers
Add to input
Replace input
Clear input
Add to output
Replace output
Clear output
Permutate words
Input to output: Cover word permutations such as "tools power" (instead of "power tools")
Missing space
Input to output: Cover typo errors such as "powertools" (instead of "power tools")
Missing letter
Input to output: Cover typo errors such as "someting" (missing "h")
Switched letter
Input to output: Cover typo errors such as "somehting" ("h" and "t")
Tidy
In output: Trim for superfluous spaces
No repetition
In output: Avoid immediate repeating words in same phrase (e.g. "deluxe deluxe tools")
No duplicates
In output: Remove duplicated phrases (e.g. if two "power tools")
Suggest related
Uses input to suggest related phrases
Cancel suggest
Show information
Keep "pressed" to show hints and warnings before fetching results
Filter
In output: Filter based on character/word count.
Filter
In output: Accept phrases that only contain #32 (space) and characters in filter text.
Prepend
Input to output: Copy and add all phrases with above text inserted at beginning
Append
Input to output: Copy and add all phrases with above text added to the end
Tools
Input
Output
Analysis
Keyword lists
Input
Output
Visuals
Export
Tools
From/till date
Search engines
Websites
Show data and apply filters
Retrieve all data for phrases that match filters (if no phrase selected, dropdown box will fetch all available)
Update data automatically
Include data from date
Include data till date
Select search engines
Select websites
Set whether to show. Checked = show if data is available. Green = has available data. Red = has no data for active phrase.
Change scale in graph
Switch between e.g. normal and logarithmic scale
Show legend in graph
Switch between e.g. show and hide legend
Show marks in graph
Switch between e.g. show and hide marks
Graph as image
Save as image
Build now
Generate a sitemap of the selected type (e.g. Google XML sitemap)
Build all
Build all kinds of sitemaps supported
Quick presets...
View various configuration examples (can speed up creating new projects)
Sitemap file paths
URL options
Document options
XML sitemap options
Template options #1
Template options #2
Template code
Sitemap file kind to build
Decide which kind of sitemap you want to generate
XML sitemap file output path (Google created XML sitemaps protocol)
RSS sitemap file output path
Text sitemap file output path (one url per line)
Template sitemap file output path (usually HTML or custom format)
ASP.net Web.sitemap file output path (for .Net navigation controls)
Items as linked descriptions
Prefer &lt;title&gt;&lt;/title&gt;
Prefer raw paths
Prefer beautified paths
Auto detection is based on root path type
Beautified paths
Convert separators to spaces
Upcase first letter in first word
Upcase first letter in all follow words
Directories as items in own lists
Ignore
Item (normal)
Directories as headlines
Ignore
Prefer path
Prefer path (linked)
Prefer title (linked)
Prefer path + title (linked)
Set path options used in sitemap
Links use full address
Override and convert slashes used in links
Layout
Columns per page
With a value above 1, links will be spread among columns
Links per page
0 means all links will be on page 1
If multiple pages in sitemap, link all at bottom
Alternative is to have "start", "prior", "selected", "next" and "end" shown
Set path root used in sitemap
Can be useful if you e.g. scanned "http://localhost" but the sitemap is for "http://www.example.com"
Add relative path to "header root link" in template sitemaps
Have e.g. "index.html" (http://example.com/index.html) instead of "" (http://example.com/) as the "root header link" in sitemap
Character set and type
Always save sitemap files as UTF-8
Option only has influence on those sitemap types where UTF-8 is optional, e.g. HTML/template sitemaps
Save UTF-8 sitemap files with BOM
Byte-order mark option only has influence when no standard specifies if BOM is to be included or not
Generated sitemap files: Include URLs with response codes:
Generated sitemap files: Options
Remove URLs excluded by "webmaster" and "output" filters
Removes URLs excluded by "output filters", "noindex" and "robots.txt"
Control
Disable "template code" for empty directories
Useful in some cases such as avoiding &lt;ul&gt;&lt;/ul&gt; (which fails W3C HTML validator)
Enable root headline "template code"
If checked, the "Code : Root headline ..." will, together with the "root directory", be inserted underneath "Code : Header"
Convert from characters to entities in urls and titles
"&amp;" to "&amp;amp;", "&lt;" to "&amp;lt;", "&gt;" to "&amp;gt;" etc.
Using calculated values
"Scan website" calculates various values for all pages. You can view and edit these in "Website data"
Prevent "extreme" &lt;priority&gt; and &lt;changefreq&gt; values
Influences the conversions done from calculated values into Google XML sitemap ranges
Override calculated values
Override Priority
Use auto "priority" calculation or set all to same value. Minus "-" removes the tag. Used as fallback for "change frequency".
Override ChangeFreq
Use auto "change frequency" calculation or set all to same value. Star "*" removes the tag.
Override LastMod with chosen date/time
Use auto "last modification" calculation or set all to same value. "Reset" sets value to "Dec 30th 1899" which removes the tag.
Reset
LastMod time zone configuration
Override with GMT timezone modifier
Google sitemap file options
Add XML necessary for validation of generated sitemap file(s)
Apply Gzip to a copy of the generated sitemap file(s)
This "gzip" copy will have name "sitemap.xml.gz" if the original name is "sitemap.xml"
Maximum number of URLs in each sitemap file
Template navigator
Code : Header
Code : Footer start
Code : Footer navigation start
Code : Footer navigation end
Code : Footer navigation items address start
Code : Footer navigation items address end
Code : Footer navigation items title start
Code : Footer navigation items title end
Code : Footer navigation items spacer
Code : Footer end
Code : Start of headline before start of directory
Code : End of headline before start of directory
Code : Start of directory
Code : End of directory
Code : Before headline / directory combination
Code : After headline / directory combination
Code : Start of item link address start
Code : Start of item link address end
Code : End of item link title start
Code : End of item link title end
Code : Start of headline link address start
Code : Start of headline link address end
Code : End of headline link title start
Code : End of headline link title end
Code : Column start
Code : Column end
Code : Root headline start
Code : Root headline end
Create robots.txt
Path of robots.txt file
Create robots.txt options
Add "disallow" urls based on "website crawler ignore filters"
Adds "Disallow" lines for the URL paths excluded by the website crawler ignore filters
Add "XML sitemaps autodiscovery"
Adds a "Sitemap:" autodiscovery line referencing the generated XML sitemap file(s)
Upload now
Upload all
Quick presets...
View various configuration examples (can speed up creating new projects)
FTP options
Upload progress
Host and port number
Upload directory path
Connection mode
Transfer mode
Username
Password
Obfuscate FTP password between project save and load
Add common pings
View various configuration examples (can speed up creating new projects)
Ping now
Ping options
Ping progress
Addresses to ping
Some services support notifications simply by requesting a specific address
Open selected file in text editor
Open the page in Notepad
Open the page in IE
Open the page in Firefox
Open the page in Opera
File information
Path:
Saved with version:
Date information
Project created:
Project last modified:
Dynamic help
Navigate embedded browser (Internet Explorer) to selected address
Open the page in IE (may give a better viewing experience)
Open the page in Firefox (may give a better viewing experience)
Open the page in Opera (may give a better viewing experience)
Default storage file name
If URLs have file names too long to be saved to disk, they are named "&lt;default&gt;0, &lt;default&gt;1, &lt;default&gt;2" etc.
Convert links
Convert to relative links for browsing files offline on local disk
Convert to relative links for uploading to an HTTP web server / website
No conversion
Auto detection is based on root path type
Download files with file extension (if empty uses "list filters")
Leave empty to use "file extension list/output filters". Useful when you want to limit downloads more than "list filters" do
Case sensitive comparisons
Determines if all filters are case sensitive (e.g. if ".extension" also matches ".EXTENSION")



Get Keyword Research now : Analyze keyword density, check positions and competition in SERPs, suggest and explode keywords for PPC campaigns etc.




This language file is part of Sitemap Generator / Keyword Research / Website Analyzer / Website Download. All rights reserved. See legal.