How to Analyze Log Files for SEO Opportunities

If you have been doing SEO for a while, you already know Google Search Console does not tell you everything. It gives you a filtered, sampled, Google-only view of what is happening on your site. Crawl tools like Screaming Frog simulate crawler behavior, but they do not reflect what Google actually did on your site yesterday, or last week.

Log files do.

Log file analysis for SEO is one of the most underused technical SEO techniques, even among experienced practitioners. It gives you raw, unfiltered evidence of every request a crawler made to your server, every status code your server returned, and every URL that got crawled or skipped entirely.

This guide walks through what log file analysis actually is, why it matters more in 2026 than it ever did before, how to access your logs across different server setups including CDNs, and exactly how to use log data to find and fix real SEO problems on your site.


What You Will Learn

→ What log files are and how to read them

→ Why log file analysis is more critical in 2026 than ever before

→ How to access server logs across different hosting and CDN setups

→ How to use edge workers to generate logs when your CDN does not provide them natively

→ The most important SEO use cases with specific next steps for each

→ How crawl data improves SEO forecasting

→ How to monitor AI bot behavior from your logs

→ Common mistakes SEOs make with log analysis and how to avoid them


What Is Log File Analysis for SEO?

Log file analysis is the process of downloading your server’s access logs and reviewing them to understand how search engine crawlers, AI bots, and users interact with your site at the HTTP level.

Every time a crawler or a user visits a page on your site, your web server records that request in an access log. This record includes the IP address of the requester, the URL they requested, the timestamp, the HTTP status code your server returned, and the user-agent string that identifies who made the request.

When you analyze these records at scale, you can see patterns that are invisible everywhere else: which pages Googlebot crawls most often, which URLs return errors, where crawl budget is being wasted, and which pages are being completely ignored despite being important to your business.

As we covered in the technical SEO guide for 2026, crawlability sits at the foundation of everything. If Google cannot efficiently crawl your site, none of the content or link work you do above that layer will reach its full potential.


Why Log File Analysis Matters More in 2026

Most SEOs know log file analysis is valuable in theory. Very few do it consistently. That gap is getting more expensive to ignore.

Google Search Console is not a replacement. GSC crawl stats are aggregated, sampled, and limited to Google’s crawlers. You cannot drill down to individual URLs with confidence, you cannot track trends at the page level, and you cannot see data for any crawler other than Googlebot.

AI bots are now a significant presence in your logs. GPTBot, ClaudeBot, Amazonbot, PerplexityBot, and CCBot are now regular visitors across nearly every category of website. These bots are not indexing your site for search results. They are training language models or powering AI answer engines like ChatGPT, Claude, and Perplexity. Your log files are currently one of the only reliable ways to see which AI systems are accessing your content, how frequently, and which pages they are targeting. This directly affects your AI Overview and AI search visibility.

Crawl budget problems are harder to spot without logs. For any site with thousands of URLs, whether that is a large ecommerce site, a news publisher, or a programmatic SEO project, you need to know how Google is actually spending its crawl budget. A technical audit might flag canonicalization issues or redirect chains, but only log files show you the actual crawl frequency distribution across your full URL inventory.

Crawler simulation tools are not enough. Legacy crawl tools and monitoring platforms only simulate what search engines see. They do not provide a true reflection of how search engines crawl. Log files come straight from the source.


What Is a Log File?

A log file is a plain text file stored on your web server that records every HTTP request the server receives, from both crawlers and real users.

Each line in the file represents one request. Here is what a typical Combined Log Format entry looks like:

66.249.66.1 - - [20/Mar/2026:14:02:05 +0000] "GET /technical-seo-for-website-performance-make-or-break-guide-2026 HTTP/1.1" 200 8452 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +https://www.google.com/bot.html)"

Breaking this down field by field:

Field | Value | What It Means
IP Address | 66.249.66.1 | Source of the request (verify against Google's IP ranges)
Timestamp | 20/Mar/2026:14:02:05 | Exact time of the request
HTTP Method | GET | Type of request
URL Path | /technical-seo-for… | The specific page requested
HTTP Version | HTTP/1.1 | Protocol used
Status Code | 200 | Server returned the page successfully
Response Size | 8452 | Bytes returned in the response
Referrer | "-" | No referring URL in this case
User-Agent | Googlebot/2.1 | Identifies the crawler making the request

Log files record requests to everything: HTML pages, CSS files, JavaScript, images, fonts, and even URLs that no longer exist. For SEO analysis, you will filter this data down to search engine and AI bot requests to HTML pages only.
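The field-by-field breakdown above can be automated. Here is a minimal parser sketch for Combined Log Format in Python; the regex group names and the shortened sample path are illustrative, not a production-grade parser:

```python
import re

# Combined Log Format: IP, identity, user, [timestamp],
# "method path protocol", status, bytes, "referrer", "user-agent".
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

def parse_line(line: str):
    """Parse one access-log line into a dict, or None if it does not match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

sample = (
    '66.249.66.1 - - [20/Mar/2026:14:02:05 +0000] '
    '"GET /technical-seo-guide HTTP/1.1" 200 8452 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +https://www.google.com/bot.html)"'
)
record = parse_line(sample)
```

Lines that do not match (malformed entries, a different log format) come back as None, which is itself a useful signal that you need to adjust the pattern for your server's output.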

Common Log File Formats

Different servers and platforms output logs in slightly different formats. The most common ones you will encounter are Combined Log Format (Apache default), W3C Extended Log Format (IIS default), Amazon Classic Load Balancer format, and NGINX access log format. Most analysis tools support Combined Log Format natively, so confirm your server’s output format before importing into any tool.


How to Access Your Log Files

Before you can analyze anything, you need the actual files. Where you find them depends entirely on your hosting and infrastructure setup.

Self-hosted servers (Apache or NGINX): Log files are stored directly on the server. Apache typically saves them at /var/log/apache2/access.log and NGINX at /var/log/nginx/access.log. Access via SSH or download using an FTP client like FileZilla.

Managed WordPress hosts (WP Engine, Kinsta): Most managed hosts provide log access through their dashboard or via SFTP. Check under sections labeled “Logs” or “Developer Tools.” If access is unclear, contact support and ask specifically for raw HTTP access logs.

CDN logs (Cloudflare, AWS CloudFront, Akamai): This is where it gets more complex. If you are using a CDN, the CDN is receiving requests before they hit your origin server. Your origin logs will be incomplete or empty for most traffic.

→ Cloudflare Logpush is available on the Enterprise plan and pushes HTTP request logs to a storage destination like AWS S3, Google Cloud Storage, or Azure Blob.

→ AWS CloudFront standard logging is available across all tiers. Logs go to an S3 bucket you specify and can be queried efficiently using AWS Athena.

→ Akamai offers log delivery through their DataStream product.

Generating logs with edge workers when CDN native logging is unavailable: If you are on Cloudflare but not on the Enterprise plan, you can use Cloudflare Workers to generate log data on the fly. Workers are scripts that run on the CDN's edge servers and can intercept every request to capture the URL, user-agent, timestamp, and IP before writing that data to Workers KV storage or pushing it to an external logging service.

Beyond logging, edge workers can be used for a wider range of SEO applications: dynamically adjusting robots.txt rules, implementing redirects, setting X-Robots-Tag headers, and modifying meta information without touching your origin server. They add infrastructure complexity and require careful testing, but for teams on CDN setups without native log access, they are a practical path to getting the data you need.

Important on retention: Log files are often kept for only 7 to 30 days by default. Set up automated archiving immediately. For most use cases, 3 months of data is sufficient. For migration analysis or seasonal trend work, you may need 6 to 12 months.


Filtering Your Logs for SEO Analysis

Raw log files include everything: asset requests, real user traffic, and every kind of bot imaginable. For SEO analysis, you only want search engine and AI crawler records for HTML pages.

Step 1: Filter by user-agent. Keep only records where the user-agent contains identifiers for the crawlers you are analyzing.

Crawler | User-Agent Identifier
Googlebot (desktop) | Googlebot/2.1
Bingbot | bingbot/2.0
GPTBot (OpenAI) | GPTBot/1.0
ClaudeBot (Anthropic) | ClaudeBot
Amazonbot | Amazonbot
PerplexityBot | PerplexityBot
CCBot (Common Crawl) | CCBot
Google-Extended | Google-Extended

Step 2: Verify bot identity via IP address. User-agent strings can be spoofed. Run a reverse DNS lookup on the requesting IP: a genuine Googlebot request resolves to a hostname ending in googlebot.com or google.com. Then run a forward DNS lookup on that hostname and confirm it resolves back to the original IP, since a reverse record alone can also be faked. Alternatively, check the IP against Google's published Googlebot IP ranges.

Step 3: Remove static asset requests. Filter out records for file types like .css, .js, .jpg, .png, .woff, .svg, and .ico. Focus on clean URL paths representing actual HTML pages.

Step 4: Normalize and import. Ensure timestamps are in a consistent format and import the cleaned data into your analysis tool or environment.
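The filtering steps above can be sketched in Python. The bot token list, asset extensions, and sample records are illustrative; the DNS functions use only the standard library, and because verification needs network access you should run it once per unique IP rather than per request:

```python
import socket

BOT_TOKENS = ["Googlebot", "bingbot", "GPTBot", "ClaudeBot",
              "Amazonbot", "PerplexityBot", "CCBot", "Google-Extended"]
ASSET_EXTENSIONS = (".css", ".js", ".jpg", ".png", ".woff", ".svg", ".ico")

def is_seo_relevant(record: dict) -> bool:
    """Keep only known-crawler requests for HTML pages (drop static assets)."""
    ua = record.get("user_agent", "")
    path = record.get("path", "").split("?")[0]
    is_bot = any(token in ua for token in BOT_TOKENS)
    return is_bot and not path.lower().endswith(ASSET_EXTENSIONS)

def verify_googlebot(ip: str) -> bool:
    """Reverse-DNS the IP, then forward-confirm the hostname.
    Spoofed user-agents fail this check. Requires network access."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]
        return (hostname.endswith((".googlebot.com", ".google.com"))
                and ip in socket.gethostbyname_ex(hostname)[2])
    except OSError:
        return False

records = [
    {"path": "/pricing", "user_agent": "Mozilla/5.0 (compatible; Googlebot/2.1; ...)"},
    {"path": "/styles/main.css", "user_agent": "Mozilla/5.0 (compatible; Googlebot/2.1; ...)"},
    {"path": "/pricing", "user_agent": "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"},
]
filtered = [r for r in records if is_seo_relevant(r)]
```

Of the three sample records, only the first survives: the second is a CSS asset and the third is a regular browser visit.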


The Most Important Use Cases for Log File Analysis

1. Understanding How Google Allocates Your Crawl Budget

Crawl budget is the number of pages Googlebot will crawl on your site within a given time window. It is not unlimited, and for large sites, how that budget gets used has a direct impact on which pages get indexed and how quickly.

Pull the crawl frequency data for your full URL inventory and sort by request count. What you will typically find on any established site is that a small number of URLs are getting crawled constantly while large sections of the site are barely touched.
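With filtered Googlebot records loaded into pandas, the frequency distribution is a few lines. The DataFrame below stands in for your real log data, and the column name is an assumption:

```python
import pandas as pd

# One row per Googlebot request; "path" holds the requested URL (illustrative data).
log_df = pd.DataFrame({"path": [
    "/", "/", "/", "/blog/post-a", "/blog/post-a",
    "/category/widgets?sort=price", "/category/widgets?sort=price",
    "/category/widgets?sort=price", "/important-landing-page", "/",
]})

crawl_freq = log_df["path"].value_counts()               # requests per URL, descending
param_share = log_df["path"].str.contains(r"\?").mean()  # share of parameter-URL requests
```

In this toy sample, a parameter URL is tied for the most-crawled slot while the important landing page was requested once, which is exactly the kind of skew the diagnostic questions below are designed to surface.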

The diagnostic questions to answer:

→ Are your highest-revenue or most strategically important pages in the top crawled URLs?

→ Are parameter URL variants consuming significant crawl budget? (URLs like ?sort=price&page=47 or ?ref=123)

→ Are entire site sections being ignored?

→ What is the ratio of page URL requests versus asset requests?

Next Steps

If parameter URLs are consuming significant crawl budget, find where Google learned about those URLs, remove internal links to them, and add Disallow rules in robots.txt for the parameter patterns. Use canonical tags to consolidate duplicate parameter versions back to the clean URL.

If important pages have low crawl frequency, the most effective fix is improving internal linking from frequently crawled pages toward the under-crawled ones. Your main navigation, footer, and hub pages carry the most internal link authority. This is directly connected to how you build your semantic content network.

If assets are being over-crawled, review your Cache-Control HTTP headers. If you are telling Google to cache assets for only one hour but those assets change quarterly, increase the max-age significantly.


2. Improving SEO Forecasting with Crawl Data

This is one of the most overlooked applications of log file analysis and one of the most practically valuable for teams that report on SEO results.

By tracking exactly when Googlebot first crawls newly published content, and combining that data with when the content first appears in Search Console impressions and when it starts driving traffic, you can build a reliable crawl-to-ranking timeline specific to your site.

A pattern you might find after three to six months of tracking:

Event | Timing
Content published | Day 0
First Googlebot crawl | Day 1 to Day 3
First appearance in GSC impressions | Day 5 to Day 8
First organic traffic | Day 7 to Day 14
Stable ranking reached | Day 30 to Day 60

Segment this by content type. Product pages, blog posts, landing pages, and category pages will each have different crawl-to-traffic timelines on your site. When you have this data, SEO forecasts become far more accurate and evidence-based.
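A minimal sketch of how to compute these benchmarks, assuming you maintain a tracking sheet with publish dates, first-crawl dates from your logs, and first-impression dates from GSC (all column names and dates here are hypothetical):

```python
import pandas as pd

# Hypothetical per-URL tracking data.
timeline = pd.DataFrame({
    "content_type":     ["blog", "blog", "product", "product"],
    "published":        pd.to_datetime(["2026-03-01", "2026-03-05", "2026-03-02", "2026-03-06"]),
    "first_crawl":      pd.to_datetime(["2026-03-02", "2026-03-07", "2026-03-05", "2026-03-10"]),
    "first_impression": pd.to_datetime(["2026-03-07", "2026-03-12", "2026-03-09", "2026-03-15"]),
})

timeline["crawl_lag_days"] = (timeline["first_crawl"] - timeline["published"]).dt.days
timeline["impression_lag_days"] = (timeline["first_impression"] - timeline["published"]).dt.days

# Median lag per content type becomes your forecasting benchmark.
benchmarks = timeline.groupby("content_type")[["crawl_lag_days", "impression_lag_days"]].median()
```

Median is used rather than mean so a single slow-to-index outlier does not distort the benchmark.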

Next Steps

If Googlebot is slow to discover new content, check how often your XML sitemap is being crawled in the logs. If it is only a few times per month, split your sitemap and create a dedicated one for newly published content submitted separately to GSC. Add links to new content from your most frequently crawled pages to accelerate discovery.

If the gap between first crawl and first ranking is longer than your benchmarks, the issue is likely content quality or topical authority rather than crawlability. That is a signal to invest in topical depth rather than technical fixes.


3. Finding Crawl Errors Before They Become Indexing Problems

Your logs show every HTTP status code Googlebot received in real time. This means you can catch 4xx and 5xx errors before they show up as indexing drops in Search Console weeks later.

The most important error patterns to monitor:

404 errors on pages that should exist: If Googlebot is repeatedly requesting a URL and getting a 404, and that URL is in your sitemap or has internal or external links pointing to it, that is an immediate fix priority.

5xx server errors: These indicate your server failed to respond properly. If 5xx errors are happening during Googlebot crawl sessions, Googlebot may reduce its crawl rate or begin deindexing affected pages. Most 5xx errors are application or infrastructure issues. Flag them for your development or DevOps team with the specific URLs and timestamps from your log data.

Redirect chains: A 301 that leads to another 301 before reaching a 200 response wastes crawl budget and dilutes link equity at each step.

Inconsistent status codes: If the same URL appears in your logs with both 200 and 404 responses on different occasions, you likely have a CMS issue that is intermittently serving the wrong response.
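The error patterns above reduce to simple aggregations once the logs are in pandas. This sketch uses illustrative data and column names; the inconsistency check flags any URL that answered with more than one status code:

```python
import pandas as pd

# One row per Googlebot request (illustrative).
hits = pd.DataFrame({
    "path":   ["/a", "/a", "/b", "/b", "/c", "/c", "/c"],
    "status": [200,  404,  301,  301,  500,  500,  200],
})

# URLs returning more than one distinct status code (often an intermittent CMS bug).
inconsistent = (hits.groupby("path")["status"].nunique()
                    .loc[lambda n: n > 1].index.tolist())

# Most-requested URLs per error class, for triage priority.
errors_4xx = hits[hits["status"].between(400, 499)]["path"].value_counts()
errors_5xx = hits[hits["status"].between(500, 599)]["path"].value_counts()
```

Sorting each error class by request count means you fix the URLs Googlebot is hitting hardest first, with the specific timestamps available in the same rows for your developer handoff.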

Next Steps

For 404 errors: decide whether to rebuild the page, redirect it to a relevant alternative, or remove internal links pointing to it. Which fix is right depends on whether the page ever had value and whether it has external links.

For redirect chains: prioritize fixing chains in your main navigation and footer first since those links appear on every page, then work through internal body content and sitemaps.


4. Verifying Alignment Between Crawl Priority and Business Priority

Take your top 50 most frequently crawled pages from your log data. Compare that list against your top 50 most important pages by business value, whether that is revenue, lead generation, or strategic content priority.

How much overlap is there?

On many older sites with accumulated legacy content, this overlap is surprisingly low. Googlebot ends up spending most of its time on old blog posts, archive pages, tag pages, or legacy URLs that are no longer central to the business. This is a signal problem: Google follows the signals your site sends through internal links, sitemaps, and crawl history.
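The overlap check itself is trivial once you have both lists; the URLs below are hypothetical placeholders for your own top-crawled and top-business-value pages:

```python
# Top pages by Googlebot requests (from logs) vs. by business value (hypothetical).
top_crawled  = ["/", "/blog/old-post", "/tag/misc", "/pricing", "/archive/2019"]
top_business = ["/", "/pricing", "/product/widget", "/signup", "/case-studies"]

overlap = set(top_crawled) & set(top_business)
overlap_ratio = len(overlap) / len(top_business)

# Important pages missing from the most-crawled set: internal-linking candidates.
under_crawled = [url for url in top_business if url not in top_crawled]
```

The `under_crawled` list is your work queue: each of those pages needs stronger internal-link signals, which is exactly what the next steps below address.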

Next Steps

Restructure your internal link architecture to put more emphasis on the pages that matter most. Review your main navigation, sidebar, and footer links. Every link in the main navigation is a vote for that page’s importance.

Remove low-value pages from your XML sitemaps or move them to a lower-priority sitemap. If certain site sections consistently show low business value and high crawl volume, consider adding Disallow rules in robots.txt for those sections.


5. Discovering Orphan Pages

Orphan pages are pages on your site that have no internal links pointing to them. Crawl tools cannot find them because they follow links. Googlebot, however, often still knows about them from old sitemaps, external links, or previous crawl sessions, and continues requesting them for months or years.

How to find orphan pages using log files:

1. Export all URLs that Googlebot requested in your log data.

2. Run a fresh site crawl using Screaming Frog to get all internally linked URLs.

3. Cross-reference the two lists. URLs in the logs but not in your crawl map are orphan candidates.

4. Filter out URLs that appear in your XML sitemap to narrow to true orphans.
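The four-step cross-reference above is set arithmetic. The URL sets here are illustrative stand-ins for your exported log URLs, crawl output, and sitemap:

```python
# URLs Googlebot requested (from logs), URLs reachable via internal links
# (from a fresh Screaming Frog crawl), and the XML sitemap (all illustrative).
log_urls     = {"/a", "/b", "/old-campaign", "/legacy-page"}
crawled_urls = {"/a", "/b"}
sitemap_urls = {"/a", "/b", "/legacy-page"}

orphan_candidates = log_urls - crawled_urls      # crawled by Google, not linked internally
true_orphans = orphan_candidates - sitemap_urls  # not in the sitemap either
```

In this example, /legacy-page is only reachable via the sitemap (an orphan candidate worth reviewing), while /old-campaign is a true orphan that Googlebot knows about purely from history or external links.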

Next Steps

Evaluate each orphan page. Is it getting organic traffic? Does it have external links? Does it contain useful content?

If you want to keep the page, add internal links to it from related pages and integrate it back into your site structure. If it is outdated or low-value, redirect it to the most relevant current page or remove it. Leaving orphan pages in limbo where Googlebot crawls them with no clear context sends weak signals about your site structure, which connects directly to entity and site authority signals.


6. Post-Migration Monitoring

After a site migration, your logs become your most important diagnostic tool. Do not wait for Search Console data, which typically lags by days or weeks.

Confirm immediately after go-live:

→ Googlebot is discovering and crawling your new URLs.

→ Old URLs are returning 301 redirects to the correct new destinations, not 404 errors or incorrect 302 redirects.

→ New URLs are returning clean 200 status codes, not intermittent 5xx errors.

→ Crawl frequency on your most important pages is consistent with pre-migration levels.

Next Steps

Export crawl frequency data for your top 100 pages from the four weeks before the migration. Export the same data for four weeks after. Compare side by side. If crawl frequency dropped significantly on key pages or error rates increased, investigate before it affects rankings.
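The side-by-side comparison can be sketched like this; the per-URL request counts and the -50% alert threshold are illustrative choices, not fixed rules:

```python
import pandas as pd

# Googlebot requests per URL, four weeks before vs. after go-live (illustrative).
pre  = pd.Series({"/pricing": 120, "/blog/a": 40, "/features": 80}, name="pre")
post = pd.Series({"/pricing": 30,  "/blog/a": 45, "/features": 0},  name="post")

compare = pd.concat([pre, post], axis=1).fillna(0)
compare["change_pct"] = (compare["post"] - compare["pre"]) / compare["pre"] * 100

# Pages whose crawl frequency dropped by more than half get investigated first.
flagged = compare[compare["change_pct"] < -50].index.tolist()
```

The `fillna(0)` matters: a URL present before the migration but entirely absent afterward should surface as a 100% drop, not silently disappear from the comparison.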

If old URLs are returning 404 instead of 301, your redirect map has gaps. Pull the full list of 404-returning old URLs from post-migration logs and add the missing redirects immediately. Add a dedicated XML sitemap containing all new URLs and submit it to GSC to accelerate discovery.


7. Monitoring AI Bot Behavior

This use case barely existed two years ago and is now essential for any site thinking seriously about AI search visibility.

Bot | Organization | Purpose
GPTBot | OpenAI | ChatGPT training and retrieval
ClaudeBot | Anthropic | Claude model training
Amazonbot | Amazon | Alexa and AI product data
PerplexityBot | Perplexity | AI answer engine retrieval
CCBot | Common Crawl | Open dataset used by many LLMs
Google-Extended | Google | Gemini training data

Log file analysis lets you see which AI bots are crawling your site, how often, which pages they are targeting, and whether their behavior has changed over time.

If your content is being heavily crawled by AI bots, there is a higher probability it is being used as training data or as a retrieval source for AI-generated answers. Understanding which content these bots favor gives you strategic signal about what types of pages tend to get cited in AI responses.
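Once AI bot records are tagged during filtering, the monitoring questions reduce to counts. The records below are illustrative:

```python
import pandas as pd

# Filtered log records, tagged with the AI bot that made each request (illustrative).
ai_hits = pd.DataFrame({
    "bot":  ["GPTBot", "GPTBot", "ClaudeBot", "PerplexityBot", "GPTBot", "CCBot"],
    "path": ["/guide", "/guide", "/guide", "/pricing", "/blog/a", "/guide"],
})

requests_per_bot = ai_hits["bot"].value_counts()   # which AI systems visit most
top_ai_targets = ai_hits["path"].value_counts()    # which pages they favor
gptbot_pages = ai_hits.loc[ai_hits["bot"] == "GPTBot", "path"].unique().tolist()
```

Re-running these counts monthly and comparing snapshots is what turns this from a curiosity into trend data you can act on.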

A technique some advanced teams are using is creating “honeytrap” test pages: pages that are disallowed from traditional search engine crawlers in robots.txt but left accessible to AI bots. These pages contain unique, specific content. If that content later appears in AI-generated responses, it confirms those bots are ingesting and using the material. This is not mainstream yet but is one of the clearest ways to verify how your content feeds into the AI answer ecosystem.

Next Steps

Decide your AI bot policy based on your content strategy. If you want citations in AI answer engines, blocking AI bots works against that goal. If you want to prevent training data use without attribution, targeted robots.txt rules are your main tool. For more granular control, CDN-level rate limiting can throttle specific bots without fully blocking them.
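As a sketch, a targeted robots.txt policy that opts out of model training while leaving traditional search crawlers untouched might look like this (adjust the bot list to your own policy; compliance is voluntary on the bot's side):

```
# Block model-training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Traditional search crawlers remain fully allowed
User-agent: Googlebot
Disallow:
```

Note that Google-Extended controls Gemini training use without affecting Googlebot's crawling or your search rankings.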


Log File Analysis Use Cases: Quick Reference

Use Case | What to Look For | Primary Fix
Crawl budget waste | High frequency on low-value URLs | robots.txt rules, canonical tags, remove internal links
SEO forecasting | Time from publish to first crawl by content type | Sitemap structure, internal links to new content
Crawl errors | 4xx and 5xx status codes | Fix broken pages, update redirects, investigate server errors
Priority misalignment | Important pages crawled infrequently | Improve internal linking, restructure navigation, clean sitemaps
Orphan pages | URLs in logs not found in site crawl | Add internal links or redirect and remove
Post-migration | Old URLs returning 404, new URLs not crawled | Complete redirect map, submit new sitemap
AI bot monitoring | AI crawler frequency and page targeting | robots.txt directives, rate limiting decisions

Real-World Example: Crawl Budget Fix on an Ecommerce Site

An ecommerce site was experiencing a gradual decline in organic traffic despite no major content changes and no obvious errors in Search Console. Log file analysis revealed that Googlebot was spending significant crawl budget on redirect chains tied to out-of-stock product variants and parameter URLs generated by faceted navigation.

These URLs were eating into crawl budget that should have been going to core category pages. The CMS had not flagged these issues because the pages technically existed and returned responses.

The fix involved implementing canonical tags on parameter URLs, cleaning up legacy redirect chains, and adding robots.txt rules to block the most problematic parameter patterns. Within two months, crawl efficiency improved in the logs: Googlebot shifted focus toward the core category pages, crawl frequency on those pages increased, and organic traffic stabilized then grew.

This kind of problem is invisible without log file analysis. Search Console would have shown a crawl stats overview but not the specific URL-level patterns that revealed the root cause.


Common Mistakes with Log File Analysis

Analyzing without a specific question. Log files contain millions of records. Opening them without a hypothesis leads to hours of browsing that produces nothing actionable. Define what you are testing first, then use the data to answer it.

Treating it as a one-time task. Your site changes constantly and Googlebot adapts. Log file analysis is ongoing monitoring, not a one-off audit. Build it into your monthly workflow.

Assuming GSC is a substitute. GSC crawl stats are sampled, aggregated, and Google-only. They are a useful supplement, not a replacement for raw log data.

Not verifying bot identity. Any bot can claim to be Googlebot in its user-agent string. Always cross-reference IP addresses against Google’s published ranges for genuine verification.

Skipping log archiving. If you do not set up automated archiving, you will lose historical data precisely when you need it most, after a traffic drop or during a migration investigation.

Ignoring AI bots. If you are not filtering for AI crawler traffic in your logs, you have a growing blind spot in understanding how your content is being consumed beyond traditional search.


Tools for Log File Analysis

Screaming Frog Log File Analyser: A desktop tool purpose-built for SEO log analysis. Import log files directly and get reports on crawl frequency, status codes, and bot activity. Good for sites of moderate scale.

Semrush Log File Analyzer: In-platform log upload and analysis focused on Googlebot behavior. Practical if you are already working within Semrush for other SEO tasks.

Conductor Monitoring: Enterprise-oriented continuous log ingestion, including real-time via Cloudflare Workers. Better suited for large sites that need always-on monitoring rather than periodic manual uploads.

Python with Pandas: Full flexibility at any scale. Parsing log files programmatically gives you complete control over filtering, aggregation, and custom analysis. The right approach for high-traffic sites where log files are gigabytes in size.

Google BigQuery with Looker Studio: If your CDN or hosting setup can pipe logs into BigQuery, you can run SQL queries against your log data at any scale and build dashboards in Looker Studio for ongoing visualization. This is the standard setup for large publisher and ecommerce SEO teams.


Log File Analysis Checklist

Setup

→ Log files downloaded or accessible (confirm format: Combined Log, W3C Extended, etc.)

→ Files cover at least 30 days (90 days preferred)

→ Filtered to search engine and AI crawler records only

→ Googlebot identity verified via reverse DNS lookup

Crawl Budget Review

→ Top 50 most crawled URLs identified

→ Crawl priority list compared against business priority list

→ Parameter URL crawl volume assessed

→ Asset crawl frequency reviewed against Cache-Control settings

→ Ratio of page requests to asset requests calculated

Error Detection

→ 4xx URLs exported and reviewed for internal/external link presence

→ 5xx patterns identified and flagged for developer investigation

→ Redirect chains identified and internal links queued for update

→ Inconsistent status code URLs flagged

Crawl Forecasting

→ Crawl-to-first-impression timeline documented for recent content

→ Timeline patterns identified per content type

→ XML sitemap crawl frequency confirmed as adequate

Orphan Page Discovery

→ Full Googlebot URL list exported from logs

→ Fresh site crawl run to get internally linked URL inventory

→ Cross-reference completed to identify orphan candidates

→ Each orphan page reviewed and assigned action

AI Bot Analysis

→ AI bot user-agents identified and separated in logs

→ Crawl frequency by AI bot assessed

→ Pages heavily targeted by AI bots noted for content strategy input

→ robots.txt directives for AI bots reviewed

Post-Migration (if applicable)

→ Pre-migration baseline crawl data archived

→ Post-migration comparison completed against baseline

→ Legacy URLs returning 404 identified and redirects added

→ New URL crawl frequency confirmed as adequate


FAQ: Log File Analysis for SEO

Q: How often should I analyze my log files?

For large sites, continuous log monitoring is ideal. For most sites, a dedicated analysis session once a month is a reasonable baseline. Always run a focused analysis immediately after any significant site change: a migration, a large batch of new content, a CMS update, or a major technical change.

Q: My site is small. Is log file analysis worth the effort?

Yes, even for smaller sites. Log files are the only way to see how Google actually crawls your site rather than how you think it does. The analysis takes less time on a small site, and you will often find crawl issues that are completely invisible in Search Console.

Q: Is Google Search Console’s Crawl Stats report enough?

Not for serious technical SEO work. GSC crawl stats give a useful high-level summary but the data is sampled and aggregated. You cannot drill down to individual URLs reliably, you cannot track page-level trends over time, and you cannot analyze any crawler other than Googlebot.

Q: Should I block AI bots in my robots.txt?

That depends on your content strategy. If you want your content cited by AI answer engines, blocking works against that goal. If you want to prevent AI systems from using your content without attribution, robots.txt directives and CDN-level rate limiting are your primary tools. Log file analysis is the first step regardless: understand which AI bots are visiting and what they are reading before making a blocking decision.

Q: What log file format do I need?

Most analysis tools accept Combined Log Format (Apache default) or W3C Extended format (IIS default). Confirm your server or CDN output format before importing into any tool.

Q: How do I find orphan pages using log files?

Export the full list of URLs that appeared in Googlebot log entries. Run a separate crawl of your site using Screaming Frog to get all internally linked URLs. Any URL that appears in the Googlebot logs but not in the crawl output is a likely orphan. Cross-reference with your XML sitemap to narrow the list further.

Q: Can log files help with JavaScript SEO?

Yes. If Googlebot is requesting your base URLs but not the API endpoints or resources triggered by JavaScript, that is a rendering gap. Comparing URLs in your logs against the full list of URLs that should exist after JS rendering helps identify content that bots are missing entirely.

Q: What if I use a CDN and cannot access full logs?

If you are on Cloudflare Enterprise, use Logpush. If not, use Cloudflare Workers to generate logs at the edge. AWS CloudFront standard logging is available across all tiers. For Akamai, the DataStream product handles log delivery. If none of these options are available, partial analysis using GSC crawl stats plus a crawl tool is a reasonable fallback, but log access should be a priority infrastructure requirement for any serious SEO operation.


Final Thoughts

Log file analysis for SEO gives you ground truth rather than approximations. Every other tool shows you a model of how crawlers interact with your site. Log files show you exactly what happened, in exact detail, with no sampling, no aggregation, and no intermediary.

In 2026, that data is more valuable than it has ever been. Crawl budget, AI bot behavior, post-migration verification, and orphan page management are all problems that exist in your logs long before they show up anywhere else.

Most of your competitors are not doing this consistently. They are relying on Search Console and crawl tools and making decisions based on incomplete information. That gap is your opportunity.

Start with the basics: download your logs, filter to crawler traffic, and work through the use cases above one at a time. Build log review into your monthly workflow and run a dedicated analysis after every significant site change. As the patterns become familiar, the analysis gets faster and the fixes get more targeted.

Log file analysis does not replace your other SEO tools. It completes them.

Tanishka Vats

Lead Content Writer | HM Digital Solutions. Results-driven content writer with over five years of experience and a background in Economics (Hons), specializing in data-driven storytelling and strategic brand positioning. I have managed live projects across Finance, B2B SaaS, Technology, and Healthcare, producing SEO-driven blogs, website copy, case studies, whitepapers, and corporate communications. Proficient with SEO tools like Ahrefs and Semrush and content management systems like WordPress and Webflow, with a proven track record of audience-centric content that drives measurable gains in traffic, engagement, and lead conversions.
